CHAPTER 14


Digital Manipulation of Brightfield and Fluorescence Images: Noise Reduction, Contrast Enhancement, and Feature Extraction

Richard A. Cardullo* and Edward H. Hinchcliffe†

*Department of Biology, The University of California, Riverside
 Riverside, California 92521
†Department of Biological Sciences
 University of Notre Dame, Notre Dame, Indiana 46556




I. Introduction
II. Digitization of Images
III. Using Gray Values to Quantify Intensity in the Microscope
IV. Noise Reduction
    A. Temporal Averaging
    B. Spatial Methods
V. Contrast Enhancement
VI. Transforms, Convolutions, and Further Uses for Digital Masks
    A. Transforms
    B. Convolution
    C. Digital Masks as Convolution Filters
VII. Conclusions
    References




                              I. Introduction

                                The theoretical basis of image processing along with its applications is an
                              extensive topic that cannot be adequately covered here but has been presented in
                              a number of texts dedicated exclusively to this topic (Andrews and Hunt, 1977;


Bates and McDonnell, 1986; Chellappa and Sawchuck, 1985; Gonzalez and Wintz, 1987; Inoue and Spring, 1997; Russ, 1994; Shotton, 1993). In this chapter, the basic principles of image processing used routinely by microscopists will be presented. Since image processing allows the investigator to convert the microscope/detector system into a quantitative device, this chapter will focus on three basic problems: (1) reducing "noise," (2) enhancing contrast, and (3) quantifying the intensity of an image. These techniques can then be applied to a number of different methodologies such as video-enhanced differential interference contrast microscopy (VEDIC; Chapter 16 by Salmon and Tran, this volume), nanovid microscopy, fluorescence recovery after photobleaching, fluorescence correlation spectroscopy, fluorescence resonance energy transfer, and fluorescence ratio imaging (Cardullo, 1999). In all cases, knowledge of the basic principles of microscopy, image formation, and image-processing routines is absolutely required to convert the microscope into a device capable of pushing the limits of resolution and contrast.


      II. Digitization of Images

         An image must first be digitized before an arithmetic, or logical, operation can
      be performed on it (Pratt, 1978). For this discussion, a digital image is a discrete
      representation of light intensity in space (Fig. 1). A particular scene can be viewed
      as being continuous in both space and light intensity and the process of digitization
      converts these to discrete values. The discrete representation of intensity is com-
      monly referred to as gray values whereas the discrete representation of position
      is given as picture elements, or pixels. Therefore, each pixel has a corresponding
      gray value which is related to light intensity [e.g., at each coordinate (x,y) there is a
      corresponding gray value designated as GV(x,y)]. The key to digitizing an image is
      to provide enough pixels and grayscale values to adequately describe the original
      image.
Clearly, the fidelity of reproduction between the true image and the digitized image depends on both the spacing between the pixels (i.e., the number of pixels that map the image) and the number of gray values used to describe the intensity of that image. Figure 1B shows a theoretical one-dimensional scan across a portion of an image. Note that the more pixels used to describe, or sample, an image, the better the digitized image reflects the true nature of the original. Conversely, as the number of pixels is progressively reduced, the true nature of the original image is lost. When choosing the digitizing device for a microscope, particular attention must be paid to matching the resolution limit of the microscope (≈0.2 µm for visible light; see Chapter 1 by Sluder and Nordberg, this volume) to the resolution limit of the digitizer (Inoue, 1986). A digitizing array that has an effective separation of 0.05 µm per pixel uses, at best, four pixels to describe the smallest resolvable objects in a microscope, resulting in a finely sampled digital representation of the original image (note that this is most clearly seen when using the digital zoom feature of many imaging devices, which results in a "boxy" image representation).

[Figure 1: four panels. (A) Intensity versus position (µm); (B–D) gray value versus pixel number at progressively coarser sampling.]

Fig. 1 (A) A densitometric line scan through a microscopic image is described by intensity values on the y-axis and position along the x-axis. (B) A 6-bit digitized representation (64 gray values) of the object in (A), with 32 pixels used to describe the position across 10 µm. The digital representation captures the major details of the original object but some finer detail is lost. Note that the image is degraded further when the position is described by only (C) 16 pixels or (D) 8 pixels.




In contrast, a digitizer whose pixel elements are separated by 1 µm effectively averages gray values over a region five times the resolution limit of the microscope, resulting in a degraded representation of the original image.
   In addition to assigning the number of pixels for an image, it is also important to know the number of gray values needed to faithfully represent the intensity of that image. In Fig. 1B, the original image has been digitized at 6-bit resolution (6 bits = 2^6 = 64 gray values, from 0 to 63). The image could be better described by more gray values (e.g., 8 bits = 256 gray values) but would be poorly described by fewer gray values (e.g., 2 bits = 4 gray values).
                      The decision on how many pixels and gray values are needed to describe an
                   image is dictated by the properties of the original image. Figure 1 represents a low-
                   contrast, high-resolution image which needs many gray scales and pixels to
                   adequately describe it. However, some images are by their very nature high


contrast and low resolution and require fewer pixels and gray values to describe them (e.g., a line drawing may require only 1 bit of gray-level depth, black or white). Ultimately, the trade-off is among contrast, resolution, and speed of processing: the more descriptors used to represent an image, the slower the processing routines will run. In general, an image should be described by as few pixels and gray values as needed so that the speed of processing can be optimized. For
      many applications, the user can select a narrower window, or region of interest
      (ROI), within the image to speed up processing.
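To make these trade-offs concrete, the short sketch below requantizes a synthetic scene at different samplings and bit depths. This is an illustrative Python/NumPy sketch, not code from the chapter; all names and values are invented.

import numpy as np

def quantize(image, bits):
    """Map [0, 1] intensities onto 2**bits discrete gray values."""
    levels = 2 ** bits
    return np.round(image * (levels - 1)).astype(np.uint16)

def downsample(image, step):
    """Crudely resample by keeping every step-th pixel."""
    return image[::step, ::step]

rng = np.random.default_rng(0)
scene = rng.random((256, 256))               # stand-in for a continuous scene
fine = quantize(scene, 8)                    # 256 gray values, full sampling
coarse = quantize(downsample(scene, 4), 6)   # 64 gray values, 4x fewer pixels per axis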


      III. Using Gray Values to Quantify Intensity in the Microscope

A useful feature shared by all image processors is that they give the microscopist a way to convert image intensity values into some meaningful parameter (Green, 1989; Russ, 1990). In standard light microscopy, the light intensity (and therefore the digitized gray value) is related to the optical density (OD), which is proportional to the log of the relative light intensity. In dilute solutions (i.e., in the absence of significant light scattering), the OD is proportional to the concentration of absorbers, C, the molar absorptivity, ε, and the path length, l, through the vessel containing the absorbers. In such a situation, the OD is related to these parameters through Beer's law:

OD = log(I_0/I) = εCl

where I and I_0 are the intensities of light in the presence and absence of absorber, respectively. In dilute solutions, it therefore might be possible to equate a change in OD with a change in molar absorptivity, path length, or concentration. However, with objects as complex as cells, all three parameters can vary tremendously, and using OD to measure a change in any one parameter is difficult.
Although difficult to interpret in cells, measuring changes in digitized gray values through an OD step wedge offers the investigator a good way to calibrate an entire microscope system coupled to an image processor. Figure 2 shows such a calibration using a brightfield microscope coupled to a CCD camera and an image processor. The wedge had 0.15-OD increments. The camera/image processor unit was digitized to 8 bits (0–255) and the median gray value was recorded for a 100 × 100 pixel array (the ROI) in each step of the wedge. In this calibration, the black level of the camera was adjusted so that the highest OD corresponded to a gray value of 5. At the other end of the scale (the lowest OD used), the relative intensity was normalized so that I/I_0 was equal to 1 and the corresponding gray value was ≈95% of the maximum gray value (≈243). As seen in Fig. 2, as the step wedge is moved through the microscope, the median gray value increases as the log of I/I_0.

[Figure 2: gray value (logarithmic scale, 5–200) versus relative intensity (0.03–1).]

Fig. 2 Calibration of a detector using an image processor. The light intensity was varied incrementally using an OD step wedge (0.15-OD increments), and the gray value was plotted as a function of the normalized intensity. In this instance the camera/image processor system was able to quantify differences in light intensity over a 40-fold range.



In addition to acting as a useful calibration, Fig. 2 also shows that an 8-bit processor can reliably quantify changes in light intensity over two orders of magnitude.
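As a worked illustration of this calibration (a sketch, not data from the figure), Beer's law can be applied directly to median gray values measured through the step wedge. NumPy is assumed, and the gray values below are invented for illustration.

import numpy as np

def optical_density(I, I0):
    """OD = log10(I0 / I) for measured and reference intensities."""
    return np.log10(I0 / I)

# Hypothetical median gray values for successive 0.15-OD wedge steps,
# each measured over a 100 x 100 pixel ROI (not real data).
gray = np.array([243.0, 172.0, 122.0, 86.0, 61.0, 43.0, 30.0, 21.0, 15.0])
I0 = gray[0]                      # lowest-OD step, normalized so I/I0 = 1
od = optical_density(gray, I0)    # increases by roughly 0.15 per step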


                   IV. Noise Reduction
The previous sections have assumed that the object being imaged is relatively free of noise and is of sufficient contrast to generate a usable image. Although this may be true in some instances, the ultimate challenge in many applications is to obtain reliable quantitative information from objects which produce a low-contrast, noisy signal (Erasmus, 1982). This is particularly true in cell physiological measurements using specialized modes of microscopy such as VEDIC, fluorescence ratio imaging, nanovid microscopy, and so on. There are different ways to reduce noise, and the method chosen depends on many different factors, including the source of the noise, the type of camera employed for a particular application, and the contrast of the specimen. For the purposes of this chapter, we shall distinguish between temporal and spatial techniques to increase the signal-to-noise ratio (SNR) of an image.

A. Temporal Averaging
In most low-light-level applications, there is a considerable amount of shot noise associated with the signal. If quantitation is needed, it is often necessary to reduce the amount of shot noise in order to improve the SNR. Because this type of noise reduction requires averaging over a number of frames (≥2 frames), the method is best used for static objects. Clearly, temporal averaging can be difficult to use for optimizing contrast for dynamic processes such as cell movement, detecting rapid changes in intracellular ion concentrations over time, quantifying molecular motions using fluorescence recovery after photobleaching, or single particle tracking. The trade-off is between improving the SNR and blurring or missing the capture of a dynamic event. Current digital microscopy equipment allows for very short exposure times, even with the low light levels associated with live cell imaging. Thus, frame averaging can be an acceptable solution to improve the SNR, provided that the light exposures are sufficiently short (Fig. 3).
Assume that at any given time, t, within a given pixel, a signal, S_i(t), represents both the true image, I, which may be inclusive of background, and some source of noise, N_i(t). Since the noise is stochastic in nature, N_i(t) will vary in time, taking on both positive and negative values, and the signal, S_i(t), will vary about some mean value. For each frame, the signal is therefore just:

S_i(t) = I + N_i(t)

As the signal is averaged over M frames, an average value for S_i(t) and N_i(t) is obtained:

⟨S_i⟩_M = I + ⟨N_i⟩_M

where ⟨S_i⟩_M and ⟨N_i⟩_M represent the average values of S_i(t) and N_i(t) over M frames. As the number of frames, M, goes to infinity, the average value of N_i goes to zero and therefore:

⟨S_i⟩_(M→∞) = I

The question facing the microscopist is how large M should be so that the SNR is acceptable. This is determined by a number of factors, including the magnitude of the original signal, the amount of noise, and the degree of precision required by the particular quantitative measurement. A quantitative measure of noise reduction can be obtained from the standard deviation of the noise, which decreases inversely as the square root of the number of frames (σ_M = σ_0/√M). Therefore, averaging a field for 4 frames gives a 2-fold improvement in the SNR, averaging for 16 frames yields a 4-fold improvement, and averaging for 256 frames yields a 16-fold improvement. At some point the user reaches a point of diminishing returns, where the noise level is below the resolution limit of the digitizer and any further improvement in the SNR is minimal (Fig. 4).
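The 1/√M behavior is easy to verify numerically. The following sketch (illustrative Python/NumPy, not from the chapter) averages M synthetic frames of a static scene and reports the residual noise.

import numpy as np

rng = np.random.default_rng(1)
truth = np.full((64, 64), 100.0)   # static "true" image I
sigma0 = 10.0                      # per-frame noise standard deviation

def average_frames(M):
    """Average M noisy frames of the static scene."""
    frames = truth + rng.normal(0.0, sigma0, size=(M, *truth.shape))
    return frames.mean(axis=0)

for M in (1, 4, 16, 256):
    residual = average_frames(M) - truth
    print(M, residual.std())       # expect roughly sigma0 / sqrt(M)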
         Although frame-averaging techniques are not always appropriate for moving
      objects, it is possible to apply a running average where the resulting image is a
      weighted sum of all previous frames. Because the image is constantly updated on a
      frame-by-frame basis, these types of recursive techniques are useful for following
      moving objects but improvement in SNR is always less than that obtained with the
      simple averaging technique outlined in the previous paragraph (Erasmus, 1982).
[Figure 3: the same field with no frame averaging, 4× frame averaging, and 8× frame averaging.]

Fig. 3 Frame averaging improves the SNR. A dividing mammalian cell expressing GFP-α-tubulin was imaged using a spinning disk confocal microscope. Images were collected with no frame averaging, or with 4 or 8 frames averaged. Random noise is reduced by frame averaging.


[Figure 4: normalized noise versus number of frames averaged (0–256), with an inset expanding the first 16 frames.]

Fig. 4 Reduction in noise as a function of the number of frames averaged. The noise is reduced inversely as the square root of the number of frames averaged. In this instance, the noise was normalized to the average value obtained for a single frame. The major gain in noise reduction is obtained after averaging very few frames (inset), and averaging for more than 64 frames leads to only minor gains in the SNR.




               Additional recursive filters are possible which optimize the SNR but these are
               typically not available on commercial image processors.
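One plausible form of such a recursive filter is an exponentially weighted running average, sketched below in illustrative Python/NumPy. The weight alpha is a hypothetical parameter that trades noise reduction against responsiveness to motion; the exact scheme in any commercial processor may differ.

import numpy as np

def running_average(frames, alpha=0.25):
    """Recursive running average: each new frame is blended into the
    accumulated image, so the result is a weighted sum of all previous
    frames with exponentially decaying weights."""
    avg = frames[0].astype(float)
    for frame in frames[1:]:
        avg = alpha * frame + (1.0 - alpha) * avg
    return avg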




B. Spatial Methods
A number of spatial techniques are available which allow the user to reduce noise on a pixel-by-pixel basis. The simplest of these techniques use simple arithmetic operations within a single frame or, alternatively, between two different frames. In general, these routines either subtract (or divide) a background image from the image of interest, or compute a mean or median value in the neighborhood of a particular pixel. More sophisticated methods use groups of pixels (known as masks, kernels, or filters) which perform higher-order functions to extract particular features from an image. These types of techniques will be discussed separately in Section VI.

1. Arithmetic Operations Between an Object and a Background Image
If an image has a constant noise component in a given pixel in each frame, that component can be removed by a simple subtraction, which optimizes the SNR. Although the SNR is improved, subtraction methods can also significantly decrease the dynamic range; these problems can generally be avoided when the microscope and camera systems are adjusted to give the optimal signal.
   Any constant noise component can be removed by subtraction and, in general, it is always best to subtract a uniform background image from the image of interest (Fig. 5).

[Figure 5: panels A–C, line scans of gray value (0–250) versus pixel number (0–60).]

Fig. 5 Subtracting noise from an image. (A) Line scan across an object and the surrounding background. (B) Line scan across the background alone reveals variations in intensity that may be due to uneven illumination across the field, camera defects, dirt on the optics, and so on. (C) The image in (B) subtracted from the image in (A). The result is a "cleaner" image with a higher SNR in the processed image compared to the original in (A).


Thus, if a pixel within an image has a gray value of, say, 242, and the background has a gray value of 22 in that same pixel, a simple subtraction yields a resultant value of 220. Image subtraction therefore preserves the majority of the signal, and the subtracted image can then be processed further using other routines. In order to reduce temporal noise, both images can first be averaged as described in Section IV.A.
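A minimal sketch of this background-subtraction workflow, with both images frame-averaged first, might look as follows (illustrative NumPy code; the array names are invented):

import numpy as np

def subtract_background(image_frames, background_frames):
    """Temporal averages of object and background, then subtraction."""
    img = np.mean(image_frames, axis=0)
    bkg = np.mean(background_frames, axis=0)
    # Clip so the difference stays within an 8-bit digitizer's range.
    return np.clip(img - bkg, 0, 255).astype(np.uint8)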


2. Concept of a Digital Mask
A number of mathematical manipulations of images involve using an array (or digital mask) around the neighborhood of a particular pixel. These digital masks can be used either to select particular pixels from the neighborhood (as in the averaging or median filtering discussed in Section IV.B.3) or to apply some mathematical weighting function to an image on a pixel-by-pixel basis to extract particular features from that image (discussed in detail in Section VI). When the mask is overlaid on an image, the particular mathematical operation is performed, the resultant value is placed into the same array position in a new image, and the operation is repeated until the entire image has been transformed. Although a digital mask can take on any shape, the most common masks are square, with the center pixel being the pixel operated on at any given time (Fig. 6). The most common masks are 3 × 3 or 5 × 5 arrays, so that only the nearest neighbors have an effect on the pixel being operated on. Larger arrays greatly increase the number of computations that must be performed, which can significantly slow down the rate of processing a particular image.
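The sweep of a 3 × 3 mask over an image can be sketched generically as below (illustrative Python/NumPy, not from the chapter; border pixels are simply skipped here, whereas real processors typically pad or wrap the edges):

import numpy as np

def apply_mask(image, func):
    """Sweep a 3 x 3 neighborhood over the image; func reduces each
    neighborhood (e.g., np.mean or np.median) and the result is written
    to the same position in a new image buffer."""
    out = image.astype(float)
    for y in range(1, image.shape[0] - 1):
        for x in range(1, image.shape[1] - 1):
            out[y, x] = func(image[y - 1:y + 2, x - 1:x + 2])
    return out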




p(x−1, y+1)     p(x, y+1)     p(x+1, y+1)

p(x−1, y)       p(x, y)       p(x+1, y)

p(x−1, y−1)     p(x, y−1)     p(x+1, y−1)



Fig. 6 Digital mask used for computing medians, averages, and higher-order mathematical operations, especially convolutions. In the case of the median and averaging filters, the mask is overlaid over each pixel in the image and the resultant value is calculated and placed into the identical pixel location in a new image buffer.

3. Averaging Versus Median Filters
When an image contains random and infrequent intensity spikes in particular pixels, a digital mask can be used around each pixel to remove them. Two common ways to remove these intensity spikes are to calculate either the average value or the median value within the neighborhood and assign that value to the center pixel in the processed image (Fig. 7). The choice of filter will depend on the type of processed image that is desired.




[Figure 7: (A) digital representation of a noisy image, with line scans of the marked row shown at the right of each panel:
 23  25  26 192  27  47  52  56  55  59
 26  27  23  27 135  42  55 116  52  61
117  26  28  29  31 175  52  56  57  59
 25  28  19  31 186  49   5  57  56  52
 29  27 178  57  30  51 195   7  55  57
 26  22  26  42  12  46  57  55 142  54
(B) after a 3 × 3 averaging filter:
 36  45  58  78  68  72  61  63
 35  26  57  78  81  67  56  63
 53  47  65  71  86  72  60  51
 42  48  65  56  71  58  70  59
(C) after a 3 × 3 median filter:
 26  27  28  42  52  55  55  57
 27  27  29  42  52  56  56  57
 28  28  31  49  51  52  56  56
 26  28  31  46  49  51  56  55]

Fig. 7 Comparison of 3 × 3 averaging and median filters to reduce noise. (A) Digital representation of an image displaying gray values at different pixel locations. In general, the object possesses a boundary which is detected as a line scan from left to right. However, the image has a number of intensity spikes which significantly mask the true boundary. A line scan across a particular row (row denoted by arrow; scan on the right-hand side) reveals both high- and low-intensity values which greatly distort the image. (B) When a 3 × 3 averaging filter is applied to the image, the extreme intensity values are significantly reduced but the image is smoothed in the vicinity of the boundary. (C) In contrast, a 3 × 3 median filter removes the extreme intensity values while preserving the true nature of the boundary.


Although both types of filters will degrade an image, the median filter preserves edges better than the averaging filter, because the averaging filter uses all values within the digital mask, including the spikes themselves, to compute the mean. For this reason, averaging filters are seldom used to remove intensity spikes: the spikes contribute to the new intensity value in the processed image, and the resultant image is blurred.
         The median filter is more desirable for removing infrequent intensity spikes
      from an image since those intensity values are always removed from the processed
      image once the median is computed. In this case, any spike is replaced with the
      median value within the digital mask, which gives a more uniform appearance to
      the processed image. Hence, a uniform background that contains infrequent
      intensity spikes will look absolutely uniform in the processed image. Since the
      median filter preserves edges (a sharpening filter), it is often used for high-contrast
      images (Fig. 8).
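Both filters are standard library operations; the sketch below contrasts them on synthetic spike noise using SciPy's stock implementations (assuming SciPy is available; the 2% spike density is invented for illustration).

import numpy as np
from scipy.ndimage import median_filter, uniform_filter

rng = np.random.default_rng(2)
img = np.full((32, 32), 50.0)
img[rng.random(img.shape) < 0.02] = 255.0   # infrequent intensity spikes

smoothed = uniform_filter(img, size=3)      # spikes bleed into neighbors
despiked = median_filter(img, size=3)       # spikes removed outright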




Fig. 8 Effects of median, sharpen, and smooth filters on image contrast. A dividing mammalian cell expressing GFP-α-tubulin was imaged using a spinning disk confocal microscope. After collection, separate digital filters were applied to the image.

                   V. Contrast Enhancement

One of the most common uses of image processing is to digitally enhance the contrast of the image using a number of different methods (Castleman, 1979). In brightfield modes such as phase contrast or differential interference contrast, the addition of a camera and an image processor can significantly enhance the contrast so that specimens with inherent low contrast can be observed. Additionally, contrast routines can be used to enhance an image in a particular region, which may allow the investigator to quantify structures or events not possible with the microscope alone. This is the basis for VEDIC, which allows, for example, the motion of low-contrast specimens such as microtubules or chromosomes to be quantified (Chapter 16 by Salmon and Tran, this volume).
                      In order to optimize contrast enhancement digitally, it is imperative that the
                   microscope optics and the camera be adjusted so that the full dynamic range of the
                   system is utilized. This is discussed further in Chapter 17 by Wolf et al., this
                   volume. The gray values of the image and background can then be displayed as a
                   histogram (Fig. 9) and the user is then able to adjust the brightness and contrast
                   within a particular region of the image. Within a particular gray value range, the
                   user can then stretch the histogram so that values within that range are spread out
over a different range in the processed image. Although this type of contrast enhancement is artificial, it allows the user to discriminate features which otherwise may not have been detectable by eye in the original image.
   Stretching gray values over a particular range in an image is one type of mathematical manipulation that can be performed on a pixel-by-pixel basis. In general, any digital image can be mathematically manipulated to produce an image with different gray values. The user-defined function that transforms the original image is known as the image transfer function (ITF), which specifies the value and the mathematical operation that will be performed on the original image. This type of operation is a point operation, meaning that the output gray value of the ITF depends only on the input gray value on a pixel-by-pixel basis. The gray values of the processed image, I2, are therefore transformed at every pixel location relative to the original image using the same ITF. Hence, every gray value in the processed image is transformed according to the generalized relationship:

GV2 = f(GV1)

where GV2 is the gray value at every pixel location in the processed image, GV1 is the input gray value of the original image, and f(GV1) is the ITF acting on the original image.
   The simplest type of ITF is a linear equation of slope m and intercept b:

GV2 = m·GV1 + b

[Figure 9: panels A–C, histograms of number of pixels (0–3000) versus gray value (0–255).]

      Fig. 9 Histogram representation of gray values for an entire image. (A) The image contains two
      distributions of intensity over the entire gray value range (0–255). (B) The lower distribution can be
      removed either through subtraction (if lower values are due to a uniform background) or by applying
      the appropriate ITF which assigns a value of 0 to all input pixels having a gray value less than 100. The
      resulting distribution contains only information from input pixels with a value greater than 100.
      (C) The histogram of the higher distribution can be stretched to fill the lower gray values resulting in
      a lower mean value than the original.



In this case, the digital contrast of the processed image is linearly transformed, with the brightness and contrast determined by the values of the slope and intercept chosen. In the most trivial case, choosing values of m = 1 and b = 0 would leave all gray values of the processed image identical to the original image (Fig. 10A). Raising the value of the intercept while leaving the slope unchanged would have the effect of increasing all gray values by some fixed value (identical to increasing the DC or black level control on a camera). Similarly, decreasing the value of the intercept will produce a darker image than the original. The value of the slope is known as the contrast enhancement factor, and changes in the value of m have significant effects on how the gray values are distributed in an image. A value of m > 1 will have the effect of spreading out the gray values over a wider range in the processed image relative to the original image. Conversely, values of m < 1 will reduce the number of gray values used to describe a processed image relative to the original (Fig. 10).

[Figure 10: panels A–C; left, line scans of gray value versus pixel number; right, the corresponding ITFs plotted as GV2 versus GV1.]

Fig. 10 Application of different linear ITFs to a low-intensity, low-contrast image. (A) Intensity line scan through an object which is described by few gray values. Applying a linear ITF with m = 1 and b = 0 (right) results in no change from the initial image. (B) Applying a linear ITF with m = 5 and b = 0 (right) leads to significant improvement in contrast. (C) Applying a linear ITF with m = 2 and b = 50 (right) slightly improves contrast and increases the brightness of the entire image.


As noted by Inoue (1986), although linear ITFs can be useful, the same effects can often be achieved more directly by properly adjusting the camera's black level and gain controls. However, this may not always be practical if conditions under the microscope are constantly changing or if this type of contrast enhancement is needed after the original images are stored.
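A linear ITF is a one-line operation in practice. The sketch below (illustrative NumPy, not code from the chapter) applies GV2 = m·GV1 + b to an 8-bit image and clips the result to the digitizer's range.

import numpy as np

def linear_itf(image, m=1.0, b=0.0):
    """GV2 = m*GV1 + b, clipped to the 8-bit range."""
    return np.clip(m * image.astype(float) + b, 0, 255).astype(np.uint8)

# e.g., m=5, b=0 stretches a low-contrast image (cf. Fig. 10B), while
# m=2, b=50 also raises the overall brightness (cf. Fig. 10C).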


The ITF is obviously not restricted to linear functions, and nonlinear ITFs can be extremely useful for enhancing particular features of an image while eliminating or reducing others (Fig. 11). Nonlinear ITFs are also useful for correcting sources of nonlinear response in an optical system or for calibrating the light response of an optical system (Inoue, 1986).


[Figure 11: panels A–C; left, line scans of gray value versus pixel number; right, the corresponding ITFs plotted as GV2 versus GV1.]

Fig. 11 Application of different nonlinear ITFs to the same low-intensity, low-contrast image as in Fig. 10. (A) Initial image and ITF (right) resulting in no change. (B) Application of a hyperbolic ITF (right) to the image results in amplification of lower input values and only slightly increases the gray values for higher input values. (C) Application of a Gaussian ITF (right) to the image results in amplification of low values, with an offset, and minimizes input values beyond 100.

The actual form of the ITF, whether linear or nonlinear, is generally application dependent and user defined. For example, nonlinear ITFs that are sigmoidal in shape are useful for enhancing images because they compress the contrast in the center of the histogram and increase contrast in the tail regions of the histogram. This type of enhancement is useful for images where most of the information is in the tails of the histogram while the central portion of the histogram contains mostly background. One sigmoidal ITF that will enhance an 8-bit image of this type is given by the equation:

GV2 = [128/(b − c)^a] [(b − c)^a − (b − GV1)^a + (GV1 + c)^a]

where b and c are the maximum and minimum gray values of the input image, respectively, and a is an arbitrary contrast enhancement factor (Inoue, 1986). For a = 1, this normally sigmoidal ITF becomes linear with a slope of 256/(b − c). As a increases beyond 1, the ITF becomes more sigmoidal in nature, with greater compression occurring at the middle gray values.
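A direct transcription of this sigmoidal ITF is sketched below (illustrative NumPy; b and c are taken from the input image as the text specifies, results are clipped to 8 bits, and a constant image with b = c is not handled).

import numpy as np

def sigmoidal_itf(gv1, a=2.0):
    """Sigmoidal ITF from the text; a is the contrast enhancement factor."""
    gv1 = gv1.astype(float)
    b, c = gv1.max(), gv1.min()     # max and min input gray values
    out = (128.0 / (b - c) ** a) * ((b - c) ** a
                                    - (b - gv1) ** a
                                    + (gv1 + c) ** a)
    return np.clip(out, 0, 255).astype(np.uint8)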
In practice, ITFs are generally implemented in memory using a lookup table (LUT). An LUT represents the transformation that is performed on each pixel on the basis of its intensity value (Figs. 12 and 13).



[Figure 12: input light intensity (0–1) versus output gray value (0–255) for LUT curves A–E.]

Fig. 12 Some different gray value LUTs used to alter contrast in images. (A) Inverse LUT, (B) logarithmic LUT, (C) square root LUT, (D) square LUT, and (E) exponential LUT. Pseudo-color LUTs would assign different colors instead of gray values.


[Figure 13: four image panels, A–D.]
Fig. 13 Different LUTs applied to the image of a cheek cell. (A) No filter, (B) reverse contrast LUT, (C) square root LUT, (D) pseudo-color LUT.

In addition to implementing particular ITFs, LUTs are also useful for pseudo-coloring images, where particular user-defined colors represent gray values in particular ranges. This is particularly useful in techniques such as ratio imaging, where color LUTs are used to represent concentrations of Ca2+, pH, or other ions when various indicator dyes are employed within cells.
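Applying an LUT amounts to a single table lookup per pixel. The sketch below (illustrative NumPy) builds an inverse LUT and a hypothetical blue-to-red pseudo-color LUT and applies them by array indexing.

import numpy as np

inverse_lut = np.arange(256, dtype=np.uint8)[::-1]      # GV2 = 255 - GV1

ramp = np.arange(256, dtype=np.uint8)
pseudocolor_lut = np.stack([ramp,                       # red rises
                            np.zeros(256, np.uint8),    # no green
                            ramp[::-1]], axis=1)        # blue falls

def apply_lut(image, lut):
    """Per-pixel lookup by array indexing; an RGB LUT yields color."""
    return lut[image]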


                   VI. Transforms, Convolutions, and Further Uses for Digital Masks

                      In the previous sections, the most frequently used methods for enhancing
                   contrast and reducing noise using temporal methods, simple arithmetic opera-
                   tions, and LUTs were described. However, more advanced methods are often
                   needed to extract particular features from an image which may not be possible
                   using these simple methods (Jahne, 1991). In this section, some of the concepts
                   and applications associated with transforms and convolutions will be introduced.


A. Transforms
Transforms take an image from one space to another. Probably the most widely used transform is the Fourier transform, which takes one from coordinate space to spatial frequency space (see Chapter 2 by Wolf, this volume, for a discussion of Fourier transforms). In general, a transform of a function in one dimension has the form:

T(u) = Σ_x f(x) g(x, u)

where T(u) is the transform of f(x) and g(x,u) is known as the forward transformation kernel. Similarly, the inverse transform is given by the relation:

f(x) = Σ_u T(u) h(x, u)

where h(x,u) is the inverse transformation kernel. In two dimensions, these transformation pairs simply become:

T(u, v) = Σ_x Σ_y f(x, y) g(x, y, u, v)

f(x, y) = Σ_u Σ_v T(u, v) h(x, y, u, v)


It is the kernel functions that provide the link that brings a function from one space to another. The discrete forms shown above suggest that these operations can be performed on a pixel-by-pixel basis, and many transforms in image processing


are computed in this manner (such a transform is known as a discrete Fourier transform, or DFT). However, the DFT is rarely computed directly from its definition; efficient algorithms compute the same result as a fast Fourier transform, or FFT.
        In the Fourier transform, the forward transformation kernel is:
g(x, u) = (1/N) e^(−2πiux/N)
      and the reverse transformation kernel is:

h(x, u) = e^(+2πiux/N)

Hence, a Fourier transform is achieved by multiplying a digitized image, whose gray value at each pixel is given by f(x,y), pixel by pixel by the forward transformation kernel given above and summing the products. Transforms, and in particular Fourier transforms, can make certain mathematical manipulations of images considerably easier than if they were performed in coordinate space directly.
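A minimal numerical sketch of the one-dimensional transform pair above, written with NumPy (np.fft evaluates the same sum efficiently; the variable names are illustrative):

import numpy as np

N = 64
x = np.arange(N)
f = np.where(x < 8, 1.0, 0.0)    # a small square pulse as a test signal

# Forward kernel g(x, u) = (1/N) exp(-2*pi*i*u*x/N), summed over x.
u = x.reshape(-1, 1)
g = np.exp(-2j * np.pi * u * x / N) / N
T = g @ f                        # T(u) = sum over x of f(x) g(x, u)

# The FFT returns the same values (up to the 1/N normalization convention).
assert np.allclose(T, np.fft.fft(f) / N)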
One example where conversion to frequency space using an FFT is useful is in identifying both high- and low-frequency components of an image, which allows one to make quantitative choices about information that can be either used or discarded. Sharp edges and many types of noise contribute to the high-frequency content of an image's Fourier transform. Image smoothing and noise removal can therefore be achieved by attenuating a range of high-frequency components in the transform domain. In this case, a filter function, F(u,v), is selected that eliminates the high-frequency components of the transformed image, I(u,v). The ideal filter would simply cut off all frequencies above some threshold value, I0 (known as the cutoff frequency):

F(u, v) = 1 if |I(u, v)| ≤ I0
F(u, v) = 0 if |I(u, v)| > I0

The absolute value brackets reflect the fact that these are zero-phase-shift filters: they do not change the phase of the transform. A graphical representation of an ideal low-pass filter is shown in Fig. 14. Just as an image can be blurred by attenuating high-frequency components using a low-pass filter, it can be sharpened by attenuating low-frequency components (Fig. 14). In analogy to the low-pass filter, an ideal high-pass filter has the following characteristics:

F(u, v) = 0 if |I(u, v)| ≤ I0
F(u, v) = 1 if |I(u, v)| > I0
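The text states the cutoff condition in terms of |I(u,v)|; in practice, ideal filters are usually defined on the distance from zero frequency, which is what the following sketch uses (NumPy throughout; the function name is illustrative):

import numpy as np

def ideal_filter(image, cutoff, lowpass=True):
    # Transform the image and shift the zero-frequency term to the center.
    F = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = image.shape
    y, x = np.ogrid[:ny, :nx]
    radius = np.hypot(y - ny / 2, x - nx / 2)   # distance from zero frequency
    mask = radius <= cutoff if lowpass else radius > cutoff
    # Zero the rejected frequencies and transform back to coordinate space.
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

image = np.random.rand(256, 256)                       # stand-in image
smoothed = ideal_filter(image, 30, lowpass=True)       # blurs; removes noise
sharpened = ideal_filter(image, 30, lowpass=False)     # keeps edges and noise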

Although useful, Fourier transforms can be computationally intensive and are still not routinely used in most microscopic applications of image processing.

Fig. 14 Frequency domain cutoff filters. The filter function in frequency space, F(u,v), is used to cut off all frequencies above or below some cutoff frequency, I0. (A) A high-pass filter attenuates all frequencies below I0, leading to a sharpening of the image. (B) A low-pass filter attenuates all frequencies above I0, which eliminates high-frequency noise but leads to smoothing or blurring of the image.



A mathematically related technique known as convolution, which utilizes digital masks to select particular features of an image, is preferred by microscopists since many of these operations can be performed at faster rates, carrying out the mathematical operation in coordinate space instead of frequency space. These operations are outlined in Section VI.B.



B. Convolution
                      The convolution of two functions, f(x) and g(x), is given mathematically by:
f(x) ∗ g(x) = ∫_{−∞}^{+∞} f(α) g(x − α) dα

where α is a dummy variable of integration. It is easiest to visualize the mechanics of convolution graphically, as demonstrated in Fig. 15, which, for simplicity, shows the convolution of two square pulses. The convolution can be broken down into three simple steps:
   1. Before carrying out the integration, reflect g(α) about the origin, yielding g(−α), and then displace it by some distance x to give g(x − α).
   2. For all values of x, multiply f(α) by g(x − α). The product will be nonzero at all points where the functions overlap.
   3. Integrating this product yields the convolution between f(x) and g(x).
Hence, the properties of the convolution are determined by the independent function f(x) and a function g(x) that selects for certain desired details in the function f(x). The selecting function g(x) is therefore analogous to the forward transformation kernel in frequency space, except that it selects for features in coordinate space instead of frequency space. This makes the convolution an important image-processing technique for microscopists interested in feature extraction, as the numerical sketch below illustrates.
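A minimal numerical sketch of the square-pulse convolution of Fig. 15 (NumPy; multiplying the discrete sum by the sampling step dx approximates the integral):

import numpy as np

dx = 0.01
x = np.arange(0, 1, dx)
f = np.ones_like(x)         # square pulse of height 1 on [0, 1)
g = 2 * np.ones_like(x)     # rectangular pulse of height 2 on [0, 1)

conv = np.convolve(f, g) * dx   # discrete approximation of the integral

# The result is a triangle that rises to ~2 at x = 1 and falls back to 0 at x = 2.
print(conv.max())               # ~2.0, the maximum overlap at x = 1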

Fig. 15 Graphical representation of one-dimensional convolution. (A) In this simple example, the function f(x) to be convolved is a square pulse of equal height and width. (B) The convolving function, g(x), is a rectangular pulse that is twice as high as it is wide. The convolving function is reflected and then moved from −∞ to +∞. (C) In all regions where there is no overlap, the product of f(x) and g(x) is zero. However, g(x) overlaps f(x) in different amounts from x = 0 to x = 2, with maximum overlap occurring at x = 1. The operation therefore detects the trailing edge of f(x) at x = 0, and the convolution results in a triangle that increases in height from 0 to 2 for 0 ≤ x ≤ 1 and decreases in height from 2 to 0 for 1 ≤ x ≤ 2.


One simple application of convolutions is the convolution of a function with an impulse function (commonly known as a delta function), δ(x − x0):

∫_{−∞}^{+∞} f(x) δ(x − x0) dx = f(x0)

For our purposes, δ(x − x0) is located at x = x0; the intensity of the impulse is determined by the value of f(x) at x = x0 and is zero everywhere else. In this example, we will let the kernel g(x) represent three impulse functions separated by a period, t:

g(x) = δ(x + t) + δ(x) + δ(x − t)

        As shown in Fig. 16, the convolution of the square pulse f(x) with these three
      impulses results in a copying of f(x) at the impulse points.
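A brief sketch of this copying property, with discrete delta functions represented as arrays holding a single nonzero sample (names illustrative):

import numpy as np

pulse = np.array([1.0, 1.0, 1.0, 1.0])   # a short square pulse, f(x)

g = np.zeros(21)
g[[0, 10, 20]] = 1.0                     # impulses at -t, 0, and +t (t = 10 samples)

copies = np.convolve(g, pulse)           # the pulse reappears at each impulse
print(copies)                            # three copies of the pulse, 10 samples apart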

Fig. 16 Using a convolution to copy an object. (A) The function f(x) is a rectangular pulse of amplitude A with its leading edge at x = 0. (B) The convolving function, g(x), consists of three delta functions at x = −t, x = 0, and x = +t. (C) The convolution f(x) ∗ g(x) results in copies of the rectangular pulse at x = −t, x = 0, and x = +t.




                     As with Fourier transforms, the actual mechanics of convolution can rapidly
                   become computationally intensive for a large number of points. Fortunately,
                   many complex procedures can be adequately performed using a variety of digital
                   masks as illustrated in Section VI.C.



C. Digital Masks as Convolution Filters
For many purposes, the appropriate digital mask can be used to extract features from images. The convolution filter, acting as a selection function g(x), can be used to modify images in a particular fashion. Convolution filters reassign intensities by multiplying the gray value of each pixel in the image and of its neighbors by the corresponding values in the digital mask and then summing all the values; the resultant is assigned to the center pixel of the new image, and the operation is repeated for every pixel in the image (Fig. 17). Convolution filters can vary in size (i.e., 3 × 3, 5 × 5, 7 × 7, and so on) depending on the type of filter chosen and the relative weight required from values neighboring the center pixel.




Fig. 17 Performing convolutions using a digital mask. The convolution mask is applied to each pixel in the image. The value assigned to the central pixel results from multiplying each element in the mask by the gray value in the corresponding image position, summing the result, and assigning the value to the corresponding pixel in a new image buffer. The operation is repeated for every pixel, resulting in the processed image. For different operations, a scalar multiplier and/or offset may be needed.


For example, consider a simple 3 × 3 convolution filter, which has the form:

                                                1/9    1/9      1/9

                                                1/9    1/9      1/9

                                                1/9    1/9      1/9

Suppose this filter is applied to a pixel with an intensity of 128 that is surrounded by the following intensity values:


                                                          123    62    97

                                                          237    128    6

                                                           19    23    124


The gray value in the processed image at that pixel, therefore, would have a new value of 1/9 (123 + 62 + 97 + 237 + 128 + 6 + 19 + 23 + 124) = 819/9 = 91. Note that this convolution filter is simply an averaging filter identical to the operation described in Section IV (in contrast, a median filter would have returned a value of 97, the middle value of the sorted neighborhood). A 5 × 5 averaging filter would simply be a mask that contains 1/25 in each pixel, whereas a 7 × 7 averaging filter would contain 1/49 in each pixel. Since the speed of processing decreases with the size of the digital mask, the most frequently used filters are 3 × 3 masks.
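A minimal sketch of the mask operation on the 3 × 3 neighborhood above (pure NumPy; element-wise multiply and sum, exactly as described):

import numpy as np

neighborhood = np.array([[123.0,  62.0,  97.0],
                         [237.0, 128.0,   6.0],
                         [ 19.0,  23.0, 124.0]])

mask = np.full((3, 3), 1.0 / 9.0)     # 3 x 3 averaging filter

center = np.sum(neighborhood * mask)  # value assigned to the center pixel
print(round(center))                  # 91, as computed in the text

print(np.median(neighborhood))        # 97.0, what a median filter would return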
In practice, the values found in digital masks tend to be integer values, with a divisor that can vary depending on the desired operation. In addition, because many operations can lead to resultant values that are negative (since the values in the convolution filter can be negative), offset values are often used to prevent this from occurring. In the example of the averaging filter, the values in the kernel would be:


                                                             1    1    1

                                                             1    1    1

                                                             1    1    1


with a divisor value of 9 and an offset of zero. In general, for an 8-bit image, divisors and offsets are chosen so that all processed values following the convolution fall between 0 and 255.
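A short sketch of the integer-kernel convention, with the divisor and offset applied after the multiply-and-sum step and the result clipped to 8 bits (the helper name and the difference kernel are illustrative):

import numpy as np

def apply_mask(neighborhood, kernel, divisor=1, offset=0):
    # Multiply-and-sum, then scale by the divisor, add the offset, and clip.
    value = np.sum(neighborhood * kernel) / divisor + offset
    return int(np.clip(value, 0, 255))

neighborhood = np.array([[123.0,  62.0,  97.0],
                         [237.0, 128.0,   6.0],
                         [ 19.0,  23.0, 124.0]])

ones_kernel = np.ones((3, 3))          # integer form of the averaging mask
print(apply_mask(neighborhood, ones_kernel, divisor=9))   # 91 again

# A kernel with negative entries needs an offset to keep the result positive.
diff_kernel = np.array([[ 0.0, 0.0, 0.0],
                        [-1.0, 0.0, 1.0],
                        [ 0.0, 0.0, 0.0]])
print(apply_mask(neighborhood, diff_kernel, divisor=2, offset=128))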
Understanding the nature of convolution filters is absolutely necessary when using the microscope as a quantitative tool. User-defined convolution filters can be used to extract information specific to a particular application. When beginning to use these filters, it is important to have a set of standards to which the filters can be applied in order to see whether the desired effect has been achieved. In general, the best test objects for convolution filters are simple geometric objects such as squares, grids, isosceles and equilateral triangles, circles, and so on. Many commercially available graphics packages provide such test objects in a variety of graphics formats. Examples of some widely used convolution masks are given in the following sections.


1. Point Detection in a Uniform Field
Assume that an image consists of a series of grains on a constant background (e.g., a dark-field image of a cellular autoradiogram). The following 3 × 3 mask is designed to detect these points:

−1  −1  −1
−1  +8  −1
−1  −1  −1

When the mask encounters a uniform background, the gray value of the processed center pixel will be zero. If, on the other hand, a value above the constant background is encountered, its value will be amplified above that background and a high-contrast image will result.
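A quick numerical sketch of this behavior (a flat background returns zero, while a bright grain is strongly amplified; the values are illustrative):

import numpy as np

point_mask = np.array([[-1.0, -1.0, -1.0],
                       [-1.0,  8.0, -1.0],
                       [-1.0, -1.0, -1.0]])

uniform = np.full((3, 3), 40.0)   # flat background
grain = uniform.copy()
grain[1, 1] = 90.0                # a single bright grain in the center

print(np.sum(uniform * point_mask))   # 0.0: the background is suppressed
print(np.sum(grain * point_mask))     # 400.0 = 8 * (90 - 40): the grain stands out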


2. Line Detection in a Uniform Field
Similar to the point mask in the previous example, a number of line masks can be used to detect sharp, orthogonal edges in an image. These line masks can be used alone or in tandem to detect horizontal, vertical, or diagonal edges in an image. Horizontal and vertical line masks are represented as:

−1  −1  −1
+2  +2  +2
−1  −1  −1


−1  +2  −1
−1  +2  −1
−1  +2  −1


whereas diagonal line masks are given as:

−1  −1  +2
−1  +2  −1
+2  −1  −1


+2  −1  −1
−1  +2  −1
−1  −1  +2

In any line mask, the direction of the positive values reflects the direction of the line detected. When choosing the type of line mask to be utilized, the user must know a priori the directions of the lines to be enhanced.



3. Edge Detection: Computing Gradients
Of course, lines and points are seldom encountered in nature, and another method for detecting edges is desirable. By far the most useful edge detection procedure is one that picks up any inflection point in intensity. This is best achieved by using gradient operators, which take the first derivative of light intensity in both the x- and y-directions. One type of gradient convolution filter that is often used is the Sobel filter. An example of a Sobel filter that calculates horizontal edges is the Sobel North filter, expressed as the following 3 × 3 kernel:


+1  +2  +1
 0   0   0
−1  −2  −1



This filter is generally not used alone, but is instead used along with the Sobel East filter, which is used to detect vertical edges in an image. The 3 × 3 kernel for this filter is:


−1  0  +1
−2  0  +2
−1  0  +1



These two Sobel filters can be used to calculate both the angle of edges in an image and the relative steepness of intensity (i.e., the derivative of intensity with respect to position) of that image. The so-called Sobel Angle filter returns the arctangent of the ratio of the Sobel North filtered pixel value to the Sobel East filtered pixel value, while the Sobel Magnitude filter calculates a resultant value from the square root of the sum of the squares of the Sobel North and Sobel East values.
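A compact sketch of the Sobel Magnitude and Sobel Angle computation over a whole image (pure NumPy; the explicit sliding-window loop keeps the mask mechanics visible, and the names are illustrative):

import numpy as np

sobel_north = np.array([[ 1.0,  2.0,  1.0],
                        [ 0.0,  0.0,  0.0],
                        [-1.0, -2.0, -1.0]])
sobel_east = np.array([[-1.0, 0.0, 1.0],
                       [-2.0, 0.0, 2.0],
                       [-1.0, 0.0, 1.0]])

def convolve3x3(image, kernel):
    # Multiply-and-sum the kernel over each interior pixel (edges left at zero).
    out = np.zeros_like(image)
    for i in range(1, image.shape[0] - 1):
        for j in range(1, image.shape[1] - 1):
            out[i, j] = np.sum(image[i-1:i+2, j-1:j+2] * kernel)
    return out

image = np.random.rand(64, 64)       # stand-in micrograph
north = convolve3x3(image, sobel_north)
east = convolve3x3(image, sobel_east)

magnitude = np.hypot(north, east)    # Sobel Magnitude
angle = np.arctan2(north, east)      # Sobel Angle, in radians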
In addition to Sobel filters, a number of different gradient filters can be used (specifically, Prewitt or Roberts gradient filters) depending on the specific application. Figure 18 shows the design and outlines the basic properties of these filters, and Fig. 19 shows the effects of these filters on a fluorescence micrograph.

Name: Gradient
Kernel:
  −1  +1  +1
  −1  −2  +1
  −1  +1  +1
Uses: Detects the vertical edges of objects in an image.

Name: Sobel
Kernels (North; East):
  +1  +2  +1      −1  0  +1
   0   0   0      −2  0  +2
  −1  −2  −1      −1  0  +1
Uses: North detects horizontal edges; East detects vertical edges. North and East are used to calculate Sobel Angle and Sobel Magnitude (see text). These filters should not be used independently; if horizontal or vertical detection is desired, use Prewitt.

Name: Prewitt
Kernels (North; East):
  +1  +1  +1      −1  0  +1
   0   0   0      −1  0  +1
  −1  −1  −1      −1  0  +1
Uses: North detects horizontal edges; East detects vertical edges.

Name: Roberts
Kernels (Northeast; Northwest):
   0  +1      +1   0
  −1   0       0  −1
Uses: Northeast detects diagonal edges from top-left to bottom-right; Northwest detects diagonal edges from top-right to bottom-left.

Fig. 18 Different gradient filters used in imaging. Shown are four different gradient operators and their common uses in microscopy and imaging.


4. Laplacian Filters
                    Laplacian operators calculate the second derivative of intensity with respect to
                 position and are useful for determining whether a pixel is on the dark side or light
                 side of an edge. Specifically, the Laplace-4 convolution filter, given as:


 0  −1   0
−1  +4  −1
 0  −1   0



detects the light and dark sides of an edge in an image. Because of its sensitivity to noise, this convolution mask is seldom used by itself as an edge detector. In order to keep all values of the processed image within 8 bits and positive, a divisor of 8 and an offset value of 128 are often employed.




                   Fig. 19 DiVerent filters applied to a fluorescence image of a dividing mammalian cell. Inverse contrast
                   LUT, gradient filter, Laplacian filter, Sobel filter.




The point detection filter shown earlier is also a kind of Laplace filter (known as the Laplace-8 filter). This filter uses a divisor value of 16 and an offset value of 128. Unlike the Laplace-4 filter, which only enhances edges, the Laplace-8 filter enhances edges as well as other features of the object.



                   VII. Conclusions
The judicious choice of image-processing routines can greatly enhance an image and can extract features that would otherwise be inaccessible. When applying digital manipulations to an image, it is imperative to understand the routines that are being employed and to make use of well-designed standards when testing them out. With the advent of high-speed digital detectors and computers, near real-time processing involving moderately complicated routines is now possible.


References

Andrews, H. C., and Hunt, B. R. (1977). "Digital Image Restoration." Prentice-Hall, Englewood Cliffs, NJ.
Bates, R. H. T., and McDonnell, M. J. (1986). "Image Restoration and Construction." Oxford University Press, New York, NY.
Cardullo, R. A. (1999). Electronic and computer image enhancement in light microscopy. In "Encyclopedia of Life Sciences." Wiley & Sons, Hoboken, NJ.
Castleman, K. R. (1979). "Digital Image Processing." Prentice-Hall, Englewood Cliffs, NJ.
Chellappa, R., and Sawchuck, A. A. (1985). "Digital Image Processing and Analysis." IEEE Press, New York, NY.
Erasmus, S. J. (1982). Reduction of noise in a TV rate electron microscope image by digital filtering. J. Microsc. 127, 29–37.
Gonzalez, R. C., and Wintz, P. (1987). "Digital Image Processing." Addison-Wesley, Reading, MA.
Green, W. B. (1989). "Digital Image Processing: A Systems Approach." Van Nostrand Reinhold, New York, NY.
Inoue, S. (1986). "Video Microscopy." Plenum, New York, NY.
Inoue, S., and Spring, K. R. (1997). "Video Microscopy," 2nd edn. Plenum, New York, NY.
Jahne, B. (1991). "Digital Image Processing." Springer-Verlag, New York, NY.
Pratt, W. K. (1978). "Digital Image Processing." Wiley, New York, NY.
Russ, J. C. (1990). "Computer-Assisted Microscopy: The Measurement and Analysis of Images." Plenum, New York, NY.
Russ, J. C. (1994). "The Image Processing Handbook." CRC Press, Ann Arbor, MI.
Shotton, D. (1993). "Electronic Light Microscopy: Techniques in Modern Biomedical Microscopy." Wiley-Liss, New York, NY.

More Related Content

What's hot

A New Watermarking Algorithm Based on Image Scrambling and SVD in the Wavelet...
A New Watermarking Algorithm Based on Image Scrambling and SVD in the Wavelet...A New Watermarking Algorithm Based on Image Scrambling and SVD in the Wavelet...
A New Watermarking Algorithm Based on Image Scrambling and SVD in the Wavelet...IDES Editor
 
Bidirectional bias correction for gradient-based shift estimation
Bidirectional bias correction for gradient-based shift estimationBidirectional bias correction for gradient-based shift estimation
Bidirectional bias correction for gradient-based shift estimationTuan Q. Pham
 
Block Matching Project
Block Matching ProjectBlock Matching Project
Block Matching Projectdswazalwar
 
Keynote Virtual Efficiency Congress 2012
Keynote Virtual Efficiency Congress 2012Keynote Virtual Efficiency Congress 2012
Keynote Virtual Efficiency Congress 2012Christian Sandor
 
Machine Learning
Machine LearningMachine Learning
Machine Learningbutest
 
Image pre processing-restoration
Image pre processing-restorationImage pre processing-restoration
Image pre processing-restorationAshish Kumar
 
Ijarcet vol-2-issue-7-2246-2251
Ijarcet vol-2-issue-7-2246-2251Ijarcet vol-2-issue-7-2246-2251
Ijarcet vol-2-issue-7-2246-2251Editor IJARCET
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
 
Keynote at 23rd International Display Workshop
Keynote at 23rd International Display WorkshopKeynote at 23rd International Display Workshop
Keynote at 23rd International Display WorkshopChristian Sandor
 
State of art pde based ip to bt vijayakrishna rowthu
State of art pde based ip to bt  vijayakrishna rowthuState of art pde based ip to bt  vijayakrishna rowthu
State of art pde based ip to bt vijayakrishna rowthuvijayakrishna rowthu
 
Shadow Detection and Removal in Still Images by using Hue Properties of Color...
Shadow Detection and Removal in Still Images by using Hue Properties of Color...Shadow Detection and Removal in Still Images by using Hue Properties of Color...
Shadow Detection and Removal in Still Images by using Hue Properties of Color...ijsrd.com
 
SIGGRAPH ASIA 2012 Stereoscopic Cloning Presentation Slide
SIGGRAPH ASIA 2012 Stereoscopic Cloning Presentation SlideSIGGRAPH ASIA 2012 Stereoscopic Cloning Presentation Slide
SIGGRAPH ASIA 2012 Stereoscopic Cloning Presentation SlideI-Chao Shen
 
Shadow Detection and Removal Techniques A Perspective View
Shadow Detection and Removal Techniques A Perspective ViewShadow Detection and Removal Techniques A Perspective View
Shadow Detection and Removal Techniques A Perspective Viewijtsrd
 

What's hot (18)

A New Watermarking Algorithm Based on Image Scrambling and SVD in the Wavelet...
A New Watermarking Algorithm Based on Image Scrambling and SVD in the Wavelet...A New Watermarking Algorithm Based on Image Scrambling and SVD in the Wavelet...
A New Watermarking Algorithm Based on Image Scrambling and SVD in the Wavelet...
 
Ao25246249
Ao25246249Ao25246249
Ao25246249
 
Bidirectional bias correction for gradient-based shift estimation
Bidirectional bias correction for gradient-based shift estimationBidirectional bias correction for gradient-based shift estimation
Bidirectional bias correction for gradient-based shift estimation
 
LudovicGustafssonCoppelPhDpresentation
LudovicGustafssonCoppelPhDpresentationLudovicGustafssonCoppelPhDpresentation
LudovicGustafssonCoppelPhDpresentation
 
Block Matching Project
Block Matching ProjectBlock Matching Project
Block Matching Project
 
In2414961500
In2414961500In2414961500
In2414961500
 
Keynote Virtual Efficiency Congress 2012
Keynote Virtual Efficiency Congress 2012Keynote Virtual Efficiency Congress 2012
Keynote Virtual Efficiency Congress 2012
 
Machine Learning
Machine LearningMachine Learning
Machine Learning
 
Image pre processing-restoration
Image pre processing-restorationImage pre processing-restoration
Image pre processing-restoration
 
Ijarcet vol-2-issue-7-2246-2251
Ijarcet vol-2-issue-7-2246-2251Ijarcet vol-2-issue-7-2246-2251
Ijarcet vol-2-issue-7-2246-2251
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
 
Keynote at 23rd International Display Workshop
Keynote at 23rd International Display WorkshopKeynote at 23rd International Display Workshop
Keynote at 23rd International Display Workshop
 
Image Interpolation
Image InterpolationImage Interpolation
Image Interpolation
 
State of art pde based ip to bt vijayakrishna rowthu
State of art pde based ip to bt  vijayakrishna rowthuState of art pde based ip to bt  vijayakrishna rowthu
State of art pde based ip to bt vijayakrishna rowthu
 
Color theory and 2D graphics
Color theory and 2D graphicsColor theory and 2D graphics
Color theory and 2D graphics
 
Shadow Detection and Removal in Still Images by using Hue Properties of Color...
Shadow Detection and Removal in Still Images by using Hue Properties of Color...Shadow Detection and Removal in Still Images by using Hue Properties of Color...
Shadow Detection and Removal in Still Images by using Hue Properties of Color...
 
SIGGRAPH ASIA 2012 Stereoscopic Cloning Presentation Slide
SIGGRAPH ASIA 2012 Stereoscopic Cloning Presentation SlideSIGGRAPH ASIA 2012 Stereoscopic Cloning Presentation Slide
SIGGRAPH ASIA 2012 Stereoscopic Cloning Presentation Slide
 
Shadow Detection and Removal Techniques A Perspective View
Shadow Detection and Removal Techniques A Perspective ViewShadow Detection and Removal Techniques A Perspective View
Shadow Detection and Removal Techniques A Perspective View
 

Viewers also liked

GRUPO 4 : new algorithm for image noise reduction
GRUPO 4 :  new algorithm for image noise reductionGRUPO 4 :  new algorithm for image noise reduction
GRUPO 4 : new algorithm for image noise reductionviisonartificial2012
 
GRUPO 5 : novel fuzzy logic based edge detection technique
GRUPO 5 :  novel fuzzy logic based edge detection techniqueGRUPO 5 :  novel fuzzy logic based edge detection technique
GRUPO 5 : novel fuzzy logic based edge detection techniqueviisonartificial2012
 
Sobrado eddie vision_artificial_brazo_robot
Sobrado eddie vision_artificial_brazo_robotSobrado eddie vision_artificial_brazo_robot
Sobrado eddie vision_artificial_brazo_robotviisonartificial2012
 
Sistema de visión artificial para el reconocimiento y
Sistema de visión artificial para el reconocimiento ySistema de visión artificial para el reconocimiento y
Sistema de visión artificial para el reconocimiento yviisonartificial2012
 
Dimensionamiento de piezas en un sistema de visión aplicado a una celda de ma...
Dimensionamiento de piezas en un sistema de visión aplicado a una celda de ma...Dimensionamiento de piezas en un sistema de visión aplicado a una celda de ma...
Dimensionamiento de piezas en un sistema de visión aplicado a una celda de ma...viisonartificial2012
 

Viewers also liked (6)

GRUPO 4 : new algorithm for image noise reduction
GRUPO 4 :  new algorithm for image noise reductionGRUPO 4 :  new algorithm for image noise reduction
GRUPO 4 : new algorithm for image noise reduction
 
GRUPO 5 : novel fuzzy logic based edge detection technique
GRUPO 5 :  novel fuzzy logic based edge detection techniqueGRUPO 5 :  novel fuzzy logic based edge detection technique
GRUPO 5 : novel fuzzy logic based edge detection technique
 
Sobrado eddie vision_artificial_brazo_robot
Sobrado eddie vision_artificial_brazo_robotSobrado eddie vision_artificial_brazo_robot
Sobrado eddie vision_artificial_brazo_robot
 
Sistema de visión artificial para el reconocimiento y
Sistema de visión artificial para el reconocimiento ySistema de visión artificial para el reconocimiento y
Sistema de visión artificial para el reconocimiento y
 
GRUPO 2 : convolution separable
GRUPO 2 :  convolution separableGRUPO 2 :  convolution separable
GRUPO 2 : convolution separable
 
Dimensionamiento de piezas en un sistema de visión aplicado a una celda de ma...
Dimensionamiento de piezas en un sistema de visión aplicado a una celda de ma...Dimensionamiento de piezas en un sistema de visión aplicado a una celda de ma...
Dimensionamiento de piezas en un sistema de visión aplicado a una celda de ma...
 

More from viisonartificial2012 (8)

Riai
RiaiRiai
Riai
 
Detección de defectos en carrocerías de vehículos basado
Detección de defectos en carrocerías de vehículos basadoDetección de defectos en carrocerías de vehículos basado
Detección de defectos en carrocerías de vehículos basado
 
Sistema de visión artificial
Sistema de visión artificialSistema de visión artificial
Sistema de visión artificial
 
Talelr sistemas de vision artifiacial
Talelr sistemas de vision artifiacialTalelr sistemas de vision artifiacial
Talelr sistemas de vision artifiacial
 
Ejemplo
EjemploEjemplo
Ejemplo
 
77 1
77 177 1
77 1
 
Electiva b2diaposs
Electiva b2diapossElectiva b2diaposs
Electiva b2diaposs
 
Electiva b2diaposs
Electiva b2diapossElectiva b2diaposs
Electiva b2diaposs
 

Recently uploaded

New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
Sample pptx for embedding into website for demo
Sample pptx for embedding into website for demoSample pptx for embedding into website for demo
Sample pptx for embedding into website for demoHarshalMandlekar2
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfMounikaPolabathina
 
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESSALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESmohitsingh558521
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsNathaniel Shimoni
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionDilum Bandara
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfLoriGlavin3
 
unit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptxunit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptxBkGupta21
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .Alan Dix
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Manik S Magar
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek SchlawackFwdays
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxLoriGlavin3
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii SoldatenkoFwdays
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxhariprasad279825
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningLars Bell
 

Recently uploaded (20)

New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
Sample pptx for embedding into website for demo
Sample pptx for embedding into website for demoSample pptx for embedding into website for demo
Sample pptx for embedding into website for demo
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdf
 
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESSALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An Introduction
 
Moving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdfMoving Beyond Passwords: FIDO Paris Seminar.pdf
Moving Beyond Passwords: FIDO Paris Seminar.pdf
 
unit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptxunit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptx
 
From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .From Family Reminiscence to Scholarly Archive .
From Family Reminiscence to Scholarly Archive .
 
Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!Anypoint Exchange: It’s Not Just a Repo!
Anypoint Exchange: It’s Not Just a Repo!
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data PrivacyTrustArc Webinar - How to Build Consumer Trust Through Data Privacy
TrustArc Webinar - How to Build Consumer Trust Through Data Privacy
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko
 
Artificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptxArtificial intelligence in cctv survelliance.pptx
Artificial intelligence in cctv survelliance.pptx
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine Tuning
 

GRUPO 1 : digital manipulation of bright field and florescence images noise reduction contrast enhancement and feature extraction

  • 1. CHAPTER 14 Digital Manipulation of Brightfield and Fluorescence Images: Noise Reduction, Contrast Enhancement, and Feature Extraction Richard A. Cardullo* and Edward H. HinchcliVe† *Department of Biology, The University of California Riverside, California 92521 † Department of Biological Sciences University of Notre Dame, Notre Dame, Indiana 46556 I. Introduction II. Digitization of Images III. Using Gray Values to Quantify Intensity in the Microscope IV.Noise Reduction A. Temporal Averaging B. Spatial Methods V. Contrast Enhancement VI. Transforms, Convolutions, and Further Uses for Digital Masks A. Transforms B. Convolution C. Digital Masks as Convolution Filters VII. Conclusions References I. Introduction The theoretical basis of image processing along with its applications is an extensive topic that cannot be adequately covered here but has been presented in a number of texts dedicated exclusively to this topic (Andrews and Hunt, 1977; METHODS IN CELL BIOLOGY, VOL. 81 0091-679X/07 $35.00 Copyright 2007, Elsevier Inc. All rights reserved. 285 DOI: 10.1016/S0091-679X(06)81014-9
  • 2. 286 Richard A. Cardullo and Edward H. HinchcliVe Bates and McDonnell, 1986; Chellappa and Sawchuck, 1985; Gonzalez and Wintz, 1987; Inoue and Spring, 1997; Russ, 1994; Shotton, 1993). In this chapter, the basic principles of image processing used routinely by microscopists will be presented. Since image processing allows the investigator to convert the microscope/detector system into a quantitative device, this chapter will focus on three basic problems: (1) reducing ‘‘noise,’’ (2) enhancing contrast, and (3) quantifying intensity of an image. These techniques can then be applied to a number of diVerent methodolo- gies such as video-enhanced diVerential interference microscopy (VEDIC; Chapter 16 by Salmon and Tran, this volume), nanovid microscopy, fluorescence recovery after photobleaching, fluorescence correlation spectroscopy, fluorescence reso- nance energy transfer, and fluorescence ratio imaging (Cardullo, 1999). In all cases, knowledge of the basic principles of microscopy, image formation, and image-processing routines is absolutely required to convert the microscope into a device capable of pushing the limits of resolution and contrast. II. Digitization of Images An image must first be digitized before an arithmetic, or logical, operation can be performed on it (Pratt, 1978). For this discussion, a digital image is a discrete representation of light intensity in space (Fig. 1). A particular scene can be viewed as being continuous in both space and light intensity and the process of digitization converts these to discrete values. The discrete representation of intensity is com- monly referred to as gray values whereas the discrete representation of position is given as picture elements, or pixels. Therefore, each pixel has a corresponding gray value which is related to light intensity [e.g., at each coordinate (x,y) there is a corresponding gray value designated as GV(x,y)]. The key to digitizing an image is to provide enough pixels and grayscale values to adequately describe the original image. Clearly, the fidelity of reproduction between the true image and the digitized image depends on both the spacing between the pixels (e.g., the number of bits that map the image) and the number of gray values used to describe the intensity of that image. Figure 1B shows a theoretical one-dimensional scan across a portion of an image. Note that the more pixels used to describe, or sample, an image, the better the digitized image reflects the true nature of the original. Conversely, as the number of pixels is progressively reduced, the true nature of the original image is lost. When choosing the digitizing device for a microscope, particular attention must be paid to matching the resolution limit of the microscope ($0.2 mm for visible light, see Chapter 1 by Sluder and Nordberg, this volume) to the resolution limit of the digitizer (Inoue, 1986). A digitizing array that has an eVective separa- tion of 0.05 mm per pixel is, at best, using four pixels to describe resolvable objects in a microscope resulting in a highly digitized representation of the original image (note that this is most clearly seen when using the digitized zoom feature of many imaging devices which results in a ‘‘boxy’’ image representation). In contrast, a
  • 3. 14. Digital Manipulation of Brightfield and Fluorescence Images 287 A B 60 60 50 50 Gray value Intensity 40 40 30 30 20 20 0 2 4 6 8 10 0 5 10 15 20 25 30 Position (mm) Pixel number C D 60 60 50 50 Gray value Gray value 40 40 30 30 20 20 0 4 8 12 16 0 2 4 6 8 Pixel number Pixel number Fig. 1 (A) A densitometric line scan through a microscopic image is described by intensity values on the y-axis and its position along the x-axis. (B) A 6-bit-digitized representation (64 gray values) of the object in (A), with 32 bits used to describe the position across 10 mm. The digital representation captures the major details of the original object but some finer detail is lost. Note that the image is degraded further when the position is described by only (C) 16 bits or (D) 8 bits. digitizer which has pixel elements separated by 1 mm eVectively averages gray values five times above the resolution limit of the microscope resulting in a degraded representation of the original image. In addition to assigning the number of pixels for an image, it is also important to know the number of gray values needed to faithfully represent the intensity of that image. In Fig. 1B, the original image has been digitized at 6-bit resolution (6 bits ¼ 26 ¼ 64 gray values from 0 to 63). The image could be better described by more gray values (e.g., 8 bits ¼ 256 gray levels) but would be poorly described by less gray values (e.g., 2 bits ¼ 4 gray values). The decision on how many pixels and gray values are needed to describe an image is dictated by the properties of the original image. Figure 1 represents a low- contrast, high-resolution image which needs many gray scales and pixels to adequately describe it. However, some images are by their very nature high
  • 4. 288 Richard A. Cardullo and Edward H. HinchcliVe contrast and low resolution and require less pixels and gray values to describe it (e.g., a line drawing may require only 1 bit of gray-level depth, black or white). Ultimately, the trade-oV is one in contrast, resolution, and speed of processing. The more descriptors used to represent an image, the slower the processing routines will be performed. In general, an image should be described by as few pixels and gray values as needed so that speed of processing can be optimized. For many applications, the user can select a narrower window, or region of interest (ROI), within the image to speed up processing. III. Using Gray Values to Quantify Intensity in the Microscope A useful feature shared by all image processors is that they allow the microsco- pist a way to quantify image intensity values into some meaningful parameter (Green, 1989; Russ, 1990). In standard light microscopy, the light intensity—and therefore the digitized gray values—is related to the optical density (OD) which is proportional to the log of the relative light intensity. In dilute solutions (i.e., in the absence of significant light scattering), the OD is proportional to the concentra- tion of absorbers, C, the molar absorptivity, e, and the path length, l, through the vessel containing the absorbers. In such a situation, the OD is related to these parameters using Beer’s law: I0 OD ¼ log ¼ eCl I where I and I0 are the intensities of light in the presence and absence of absorber, respectively. Within dilute solutions, it therefore might be possible to equate a change in OD with changes either in molar absorptivity, path length, or concen- tration. However, with objects as complex as cells, all three parameters can vary tremendously and the utility of using OD to measure a change in any one parameter is diYcult. Although diYcult to interpret in cells, measuring changes in digitized gray values in an OD wedge oVers the investigator a good way to calibrate an entire microscope system coupled to an image processor. Figure 2 shows such a calibra- tion using a brightfield microscope coupled to a CCD camera and an image processor. The wedge had 0.15-OD increments. The camera/image processor unit was digitized to 8 bits (0–255) and the median gray value was recorded for a 100 Â 100 pixel array (the ROI) in each step of the wedge. In this calibration, the black level of the camera was adjusted so that the highest OD corresponded to a gray value of 5. At the other end of the scale (the lowest OD used), the relative intensity was normalized so that I/I0 was equal to 1 and the corresponding gray value was $95% of the maximum gray value ($243). As seen in Fig. 2, as the step wedge is moved through the microscope, the median value of the gray value increased as the log of I/I0. In addition to acting as a useful calibration, this figure
  • 5. 14. Digital Manipulation of Brightfield and Fluorescence Images 289 200 100 70 Gray value 50 30 20 10 5 0.03 0.1 0.3 1 Relative intensity Fig. 2 Calibration of a detector using an image processor. The light intensity varied incrementally using an OD step wedge (0.15-OD increments), and the gray value was plotted as a function of the normalized intensity. In this instance the camera/image processor system was able to quantify diVer- ences in light intensity over a 40-fold range. shows that an 8-bit processor can reliable quantify changes in light intensity over two orders of magnitude. IV. Noise Reduction The previous sections have assumed that the object being imaged is relatively free of noise and is of suYcient contrast to generate a usable image. Although this may be true in some instances, the ultimate challenge in many applications is to obtain reliable quantitative information from objects which produce a low-contrast, noisy signal (Erasmus, 1982). This is particularly true in cell physiological measurements using specialized modes of microscopy such as VEDIC, fluorescence ratio imaging, nanovid microscopy, and so on. There are diVerent ways to reduce noise and the methods of noise reduction chosen depend on many diVerent factors, including the source of the noise, the type of camera employed for a particular application, and the contrast of the specimen. For the purposes of this chapter, we shall distinguish between temporal and spatial techniques to increase the signal-to-noise ratio (SNR) of an image. A. Temporal Averaging In most low-light level applications, there is considerable amount of shot noise associated with the signal. If quantitation is needed, it is often necessary to reduce the amount of shot noise in order to improve the SNR. Because this type of noise reduction requires averaging over a number of frames (!2 frames), this method is
Clearly, temporal averaging can be difficult to use for optimizing contrast for dynamic processes such as cell movement, detecting rapid changes in intracellular ion concentrations over time, quantifying molecular motions using fluorescence recovery after photobleaching, or single particle tracking. The trade-off is between improving SNR and blurring or missing the capture of a dynamic event. Current digital microscopy equipment allows for very short exposure times, even with the low light levels associated with live cell imaging. Thus, frame averaging can be an acceptable solution to improve SNR, provided that the light exposures are sufficiently short (Fig. 3).

Assume that at any given time, t, within a given pixel, a signal, Si(t), represents both the true image, I, which may be inclusive of background, and some source of noise, Ni(t). Since the noise is stochastic in nature, Ni(t) will vary in time, taking on both positive and negative values, and the signal, Si(t), will vary about some mean value. For each frame, the signal is therefore just:

Si(t) = I + Ni(t)

As the signal is averaged over M frames, an average value for Si(t) and Ni(t) is obtained:

⟨Si⟩M = I + ⟨Ni⟩M

where ⟨Si⟩M and ⟨Ni⟩M represent the average values of Si(t) and Ni(t) over M frames. As the number of frames, M, goes to infinity, the average value of Ni goes to zero and therefore:

⟨Si⟩M→∞ = I

The question facing the microscopist is how large M should be so that the SNR is acceptable. This is determined by a number of factors, including the magnitude of the original signal, the amount of noise, and the degree of precision required by the particular quantitative measurement. A quantitative measure of noise reduction can be obtained by looking at the standard deviation of the noise, which decreases inversely as the square root of the number of frames (σM = σ0/√M). Therefore, averaging a field for 4 frames gives a 2-fold improvement in the SNR, averaging for 16 frames yields a 4-fold improvement, and averaging for 256 frames yields a 16-fold improvement. At some point the user reaches a point of diminishing returns, where the noise level falls below the resolution limit of the digitizer and any further improvement in SNR is minimal (Fig. 4).

Although frame-averaging techniques are not always appropriate for moving objects, it is possible to apply a running average in which the resulting image is a weighted sum of all previous frames. Because the image is constantly updated on a frame-by-frame basis, these types of recursive techniques are useful for following moving objects, but the improvement in SNR is always less than that obtained with the simple averaging technique outlined in the previous paragraph (Erasmus, 1982).
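Both schemes reduce to a few lines of array arithmetic. The sketch below (NumPy; frames is assumed to be an iterable of equally sized images, and the weight alpha is an illustrative choice) contrasts simple M-frame averaging with a recursive running average:

    import numpy as np

    def average_frames(frames):
        """Simple temporal average: noise falls as 1/sqrt(M)."""
        stack = np.stack([np.asarray(f, dtype=float) for f in frames])
        return stack.mean(axis=0)

    def running_average(frames, alpha=0.25):
        """Recursive (exponentially weighted) average for moving objects.

        Each new frame contributes a fraction alpha; older frames decay
        geometrically, so the image can track motion, at the cost of a
        smaller SNR gain than simple averaging.
        """
        avg = None
        for f in frames:
            f = np.asarray(f, dtype=float)
            avg = f.copy() if avg is None else (1.0 - alpha) * avg + alpha * f
        return avg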
Fig. 3 Frame averaging improves the SNR. A dividing mammalian cell expressing GFP-α-tubulin was imaged using a spinning disk confocal microscope. Images were collected with no frame averaging, with 4 frames averaged, and with 8 frames averaged. Random noise is reduced by frame averaging.
Fig. 4 Reduction in noise as a function of the number of frames averaged. The noise is reduced inversely as the square root of the number of frames averaged. In this instance, the noise was normalized to the value obtained for a single frame. The major gain in noise reduction is obtained after averaging very few frames (inset), and averaging for more than 64 frames leads to only minor gains in the SNR.

Additional recursive filters that further optimize the SNR are possible, but these are typically not available on commercial image processors.

B. Spatial Methods

A number of spatial techniques are available which allow the user to reduce noise on a pixel-by-pixel basis. The simplest of these techniques generally use simple arithmetic operations within a single frame or, alternatively, between two different frames. In general, these routines either subtract (or divide) a background image from the image of interest or calculate a mean or median value within the neighborhood of a particular pixel. More sophisticated methods use groups of pixels (known as masks, kernels, or filters) to perform higher-order functions that extract particular features from an image. These techniques are discussed separately in Section VI.
1. Arithmetic Operations Between an Object and a Background Image

If an image has a constant noise component in a given pixel in each frame, that component can be removed by performing a simple subtraction, which removes the noise and optimizes the SNR. Although the SNR is improved, subtraction methods can also significantly decrease the dynamic range; these problems can generally be avoided when the microscope and camera systems are adjusted to give the optimal signal.

Any constant noise component can be removed by subtraction and, in general, it is always best to subtract the noise component from a uniform background image (Fig. 5). Thus, if a pixel within an image has a gray value of, say, 242, with the background having a gray value of 22 in that same pixel, then a simple subtraction would yield a resultant value of 220. Image subtraction therefore preserves the majority of the signal, and the subtracted image can then be processed further using other routines. In order to reduce temporal noise, both images can first be averaged as described in Section IV.A.

Fig. 5 Subtracting noise from an image. (A) Line scan across an object and the surrounding background. (B) Line scan across the background alone reveals variations in intensity that may be due to uneven light intensity across the field, camera defects, dirt on the optics, and so on. (C) Image in (B) subtracted from image in (A). The result is a "cleaner" image with a higher SNR in the processed image compared to the original in (A).
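In code, background correction is a one-line array operation; the only subtlety is keeping the result within the digitizer's range. A minimal sketch (NumPy; the function name is ours, and the object and background frames are assumed to have already been temporally averaged as described above):

    import numpy as np

    def subtract_background(image, background):
        """Subtract a uniform background frame, clipping to the 8-bit range."""
        diff = np.asarray(image, dtype=float) - np.asarray(background, dtype=float)
        return np.clip(diff, 0, 255).astype(np.uint8)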
2. Concept of a Digital Mask

A number of mathematical manipulations of images involve using an array (or a digital mask) around the neighborhood of a particular pixel. These digital masks can be used either to select particular pixels from the neighborhood (as in the averaging or median filtering discussed in Section IV.B.3) or, alternatively, to apply some mathematical weighting function to an image on a pixel-by-pixel basis to extract particular features from that image (discussed in detail in Section VI). When the mask is overlaid on an image, the particular mathematical operation is performed, the resultant value is placed into the same array position in a new image, and the operation is repeated until the entire image has been transformed. Although a digital mask can take on any shape, the most common masks are square, with the center pixel being the pixel operated on at any given time (Fig. 6). The most common masks are 3 × 3 or 5 × 5 arrays, so that only the nearest neighbors have an effect on the pixel being operated on. Larger arrays greatly increase the number of computations that must be performed, which can significantly slow the rate at which a particular image is processed.

p(x−1, y+1)   p(x, y+1)   p(x+1, y+1)
p(x−1, y)     p(x, y)     p(x+1, y)
p(x−1, y−1)   p(x, y−1)   p(x+1, y−1)

Fig. 6 Digital mask used for computing medians, averages, and higher-order mathematical operations, especially convolutions. In the case of the median and averaging filters, the mask is overlaid over each pixel in the image, and the resultant value is calculated and placed into the identical pixel location in a new image buffer.
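The mask logic is the same whatever the operation: visit every pixel, gather its neighborhood, compute one number, and write it to the same location in a new buffer. A minimal sketch (NumPy; the function name is ours, op is any function that reduces a neighborhood to a single value, and edge pixels are handled here by replicating the border):

    import numpy as np

    def apply_mask(image, op, size=3):
        """Apply a neighborhood operation through a size x size digital mask."""
        img = np.asarray(image, dtype=float)
        pad = size // 2
        padded = np.pad(img, pad, mode="edge")   # replicate edges
        out = np.empty_like(img)
        rows, cols = img.shape
        for y in range(rows):
            for x in range(cols):
                neighborhood = padded[y:y + size, x:x + size]
                out[y, x] = op(neighborhood)     # result goes to the center pixel
        return out

With op=np.mean this is the averaging filter of the next section; with op=np.median it is the median filter.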
3. Averaging Versus Median Filters

When an image contains random and infrequent intensity spikes in particular pixels, a digital mask can be used around each pixel to remove them. Two common ways to remove these intensity spikes are to calculate either the average value or the median value within the neighborhood and to assign that value to the center pixel in the processed image (Fig. 7). The choice of filter will depend on the type of processed image that is desired.

Fig. 7 Comparison of 3 × 3 averaging and median filters to reduce noise. (A) Digital representation of an image displaying gray values at different pixel locations. In general, the object possesses a boundary which is detected as a line scan from left to right. However, the image has a number of intensity spikes which significantly mask the true boundary. A line scan across a particular row (row denoted by the arrow; the scan is shown on the right-hand side) reveals both high- and low-intensity values which greatly distort the image. (B) When a 3 × 3 averaging filter is applied to the image, the extreme intensity values are significantly reduced but the image is smoothed in the vicinity of the boundary. (C) In contrast, a 3 × 3 median filter removes the extreme intensity values while preserving the true nature of the boundary.
Although both types of filters will degrade an image, the median filter preserves edges better than the averaging filter, since the averaging filter uses all of the values within the digital mask to compute a mean. For this reason, averaging filters are seldom used to remove intensity spikes: the spikes themselves contribute to the new intensity value in the processed image, and the resultant image is blurred. The median filter is more desirable for removing infrequent intensity spikes from an image, since those intensity values are always removed from the processed image once the median is computed. In this case, any spike is replaced with the median value within the digital mask, which gives a more uniform appearance to the processed image. Hence, a uniform background that contains infrequent intensity spikes will look absolutely uniform in the processed image. Because the median filter preserves edges (acting as a sharpening filter), it is often used for high-contrast images (Fig. 8).

Fig. 8 Effects of median sharpen and smooth filters on image contrast. A dividing mammalian cell expressing GFP-α-tubulin was imaged using a spinning disk confocal microscope. After collection, separate digital filters were applied to the image.
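Using the apply_mask sketch from the previous subsection, the difference between the two filters is easy to demonstrate on a synthetic neighborhood containing a single spike (the values are illustrative):

    import numpy as np

    # A flat 3 x 3 neighborhood of gray value 50 with one spike of 250.
    neighborhood = np.array([[50,  50, 50],
                             [50, 250, 50],
                             [50,  50, 50]], dtype=float)

    print(np.mean(neighborhood))    # 72.2: the spike bleeds into the average
    print(np.median(neighborhood))  # 50.0: the spike is discarded outright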
V. Contrast Enhancement

One of the most common uses of image processing is to digitally enhance the contrast of the image using a number of different methods (Castleman, 1979). In brightfield modes such as phase contrast or differential interference contrast, the addition of a camera and an image processor can significantly enhance the contrast so that specimens with inherently low contrast can be observed. Additionally, contrast routines can be used to enhance an image in a particular region, which may allow the investigator to quantify structures or events not observable with the microscope alone. This is the basis for VEDIC, which allows, for example, the motion of low-contrast specimens such as microtubules or chromosomes to be quantified (Chapter 16 by Salmon and Tran, this volume).

In order to optimize contrast enhancement digitally, it is imperative that the microscope optics and the camera be adjusted so that the full dynamic range of the system is utilized. This is discussed further in Chapter 17 by Wolf et al., this volume. The gray values of the image and background can then be displayed as a histogram (Fig. 9), and the user is able to adjust the brightness and contrast within a particular region of the image. Within a particular gray value range, the user can stretch the histogram so that values within that range are spread out over a different range in the processed image. Although this type of contrast enhancement is artificial, it allows the user to discriminate features which otherwise may not have been detectable by eye in the original image.

Stretching gray values over a particular range in an image is one type of mathematical manipulation which can be performed on a pixel-by-pixel basis. In general, any digital image can be mathematically manipulated to produce an image with different gray values. The user-defined function that transforms the original image is known as the image transfer function (ITF), which specifies the value and the mathematical operation that will be performed on the original image. This type of operation is a point operation, which means that the output gray value of the ITF depends only on the input gray value on a pixel-by-pixel basis. The gray values of the processed image, I2, are therefore transformed at every pixel location relative to the original image using the same ITF. Hence, every gray value in the processed image is transformed according to the generalized relationship:

GV2 = f(GV1)

where GV2 is the gray value at every pixel location in the processed image, GV1 is the input gray value of the original image, and f(GV1) is the ITF acting on the original image. The simplest type of ITF is a linear equation of slope m and intercept b:

GV2 = m·GV1 + b
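Because a point operation depends only on the input gray value, a linear ITF is usually implemented as a 256-entry lookup table rather than recomputed for every pixel. A minimal sketch (NumPy; the helper names are ours, and the slope and intercept in the example echo Fig. 10B below):

    import numpy as np

    def linear_itf_lut(m, b):
        """Build an 8-bit lookup table for GV2 = m * GV1 + b, clipped to 0-255."""
        gv1 = np.arange(256, dtype=float)
        return np.clip(m * gv1 + b, 0, 255).astype(np.uint8)

    def apply_itf(image, lut):
        """Point operation: each output pixel depends only on its input gray value."""
        return lut[np.asarray(image, dtype=np.uint8)]

    # Example: contrast enhancement factor m = 5 with no offset (cf. Fig. 10B)
    lut = linear_itf_lut(5, 0)
    # processed = apply_itf(image, lut)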
Fig. 9 Histogram representation of gray values for an entire image. (A) The image contains two distributions of intensity over the entire gray value range (0–255). (B) The lower distribution can be removed either through subtraction (if the lower values are due to a uniform background) or by applying the appropriate ITF, which assigns a value of 0 to all input pixels having a gray value less than 100. The resulting distribution contains only information from input pixels with a value greater than 100. (C) The histogram of the higher distribution can be stretched to fill the lower gray values, resulting in a lower mean value than the original.

In this case, the digital contrast of the processed image is linearly transformed, with the brightness and contrast determined by both the value of the slope and the intercept chosen. In the most trivial case, choosing values of m = 1 and b = 0 would leave all gray values of the processed image identical to the original image (Fig. 10A). Raising the value of the intercept while leaving the slope unchanged has the effect of increasing all gray values by some fixed amount (identical to increasing the DC or black level control on a camera). Similarly, decreasing the value of the intercept produces a darker image than the original. The value of the slope is known as the contrast enhancement factor, and changes in the value of m have significant effects on how the gray values are distributed in an image. A value of m > 1 has the effect of spreading the gray values over a wider range in the processed image relative to the original image. Conversely, a value of m < 1 reduces the number of gray values used to describe a processed image relative to the original (Fig. 10).
Fig. 10 Application of different linear ITFs to a low-intensity, low-contrast image. (A) Intensity line scan through an object which is described by few gray values. Applying a linear ITF with m = 1 and b = 0 (right) results in no change from the initial image. (B) Applying a linear ITF with m = 5 and b = 0 (right) leads to a significant improvement in contrast. (C) Applying a linear ITF with m = 2 and b = 50 (right) slightly improves contrast and increases the brightness of the entire image.

As noted by Inoue (1986), although linear ITFs can be useful, the same effects can often best be achieved by properly adjusting the camera's black level and gain controls. However, this may not always be practical if conditions under the microscope are constantly changing or if this type of contrast enhancement is needed after the original images have been stored.
The ITF is obviously not restricted to linear functions, and nonlinear ITFs can be extremely useful for enhancing particular features of an image while eliminating or reducing others (Fig. 11). Nonlinear ITFs are also useful for correcting sources of nonlinear response in an optical system or for calibrating the light response of an optical system (Inoue, 1986).

Fig. 11 Application of different nonlinear ITFs to the same low-intensity, low-contrast image as in Fig. 10. (A) Initial image and ITF (right), resulting in no change. (B) Application of a hyperbolic ITF (right) to the image results in amplification of lower input values and only slightly increases the gray values for higher input values. (C) Application of a Gaussian ITF (right) to the image results in amplification of low values, with an offset, and minimizes input values beyond 100.
The actual form of the ITF, whether linear or nonlinear, is generally application dependent and user defined. For example, nonlinear ITFs that are sigmoidal in shape are useful for enhancing images by compressing the contrast in the center of the histogram and increasing contrast in the tail regions of the histogram. This type of enhancement is useful for images where most of the information about the image is in the tails of the histogram while the central portion of the histogram contains mostly background information. One sigmoidal ITF that will enhance an 8-bit image of this type is given by the equation:

GV2 = [128/(b − c)^a] × [(b − c)^a − (b − GV1)^a + (GV1 − c)^a]

where b and c are the maximum and minimum gray values of the input image, respectively, and a is an arbitrary contrast enhancement factor (Inoue, 1986). For a value of a = 1, this normally sigmoidal ITF becomes linear with a slope of 256/(b − c). As a increases beyond 1, the ITF becomes more sigmoidal in nature, with greater compression occurring at the middle gray values.

In practice, ITFs are generally implemented in memory using a lookup table (LUT). An LUT represents the transformation that is performed on each pixel on the basis of that pixel's intensity value (Figs. 12 and 13).

Fig. 12 Some different gray value LUTs used to alter contrast in images. (A) Inverse LUT, (B) logarithmic LUT, (C) square root LUT, (D) square LUT, and (E) exponential LUT. Pseudo-color LUTs would assign different colors instead of gray values.
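The sigmoidal ITF above drops straight into the same LUT machinery. A sketch (NumPy; the function name is ours, and the b, c, and a values in the example are illustrative):

    import numpy as np

    def sigmoid_itf_lut(b, c, a=2.0):
        """LUT for the sigmoidal ITF:
        GV2 = 128/(b - c)**a * ((b - c)**a - (b - gv)**a + (gv - c)**a)
        For a = 1 this reduces to a line of slope 256/(b - c).
        """
        gv = np.clip(np.arange(256, dtype=float), c, b)   # stay inside [c, b]
        gv2 = 128.0 / (b - c) ** a * ((b - c) ** a - (b - gv) ** a + (gv - c) ** a)
        return np.clip(gv2, 0, 255).astype(np.uint8)

    # Example: stretch an image whose gray values span 30-220
    lut = sigmoid_itf_lut(b=220, c=30, a=2.0)
    # processed = lut[image]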
Fig. 13 Different LUTs applied to the image of a cheek cell. (A) No filter, (B) reverse contrast LUT, (C) square root LUT, (D) pseudo-color LUT.
In addition to LUTs which perform particular ITFs, LUTs are also useful for pseudo-coloring images, where particular user-defined colors represent gray values in particular ranges. This is particularly useful in techniques such as ratio imaging, where color LUTs are used to represent concentrations of Ca2+, pH, or other ions when various indicator dyes are employed within cells.

VI. Transforms, Convolutions, and Further Uses for Digital Masks

In the previous sections, the most frequently used methods for enhancing contrast and reducing noise using temporal methods, simple arithmetic operations, and LUTs were described. However, more advanced methods are often needed to extract particular features from an image, which may not be possible using these simple methods (Jahne, 1991). In this section, some of the concepts and applications associated with transforms and convolutions will be introduced.

A. Transforms

Transforms take an image from one space to another. Probably the most used transform is the Fourier transform, which takes one from coordinate space to spatial frequency space (see Chapter 2 by Wolf, this volume, for a discussion of Fourier transforms). In general, a transform of a function in one dimension has the form:

T(u) = Σx f(x) g(x, u)

where T(u) is the transform of f(x) and g(x, u) is known as the forward transformation kernel. Similarly, the inverse transform is given by the relation:

f(x) = Σu T(u) h(x, u)

where h(x, u) is the inverse transformation kernel. In two dimensions, these transformation pairs simply become:

T(u, v) = Σx Σy f(x, y) g(x, y, u, v)

f(x, y) = Σu Σv T(u, v) h(x, y, u, v)

It is the kernel functions that provide the link which brings a function from one space to another. The discrete forms shown above suggest that these operations can be performed on a pixel-by-pixel basis, and many transforms in image processing are computed in this manner (in the Fourier case, this is known as the discrete Fourier transform, or DFT).
In practice, the DFT is generally computed using more efficient algorithms known as fast Fourier transforms (FFTs). In the Fourier transform, the forward transformation kernel is:

g(x, u) = (1/N) e^(−2πiux/N)

and the inverse transformation kernel is:

h(x, u) = e^(+2πiux/N)

Hence, a Fourier transform is achieved by multiplying a digitized image, whose gray values are given by f(x, y), pixel-by-pixel by the forward transformation kernel given above. Transforms, and in particular Fourier transforms, can make certain mathematical manipulations of images considerably easier than if they were performed in coordinate space directly.

One example where conversion to frequency space using an FFT is useful is in identifying both high- and low-frequency components of an image, which allows one to make quantitative choices about information that can be either used or discarded. Sharp edges and many types of noise contribute to the high-frequency content of an image's Fourier transform. Image smoothing and noise removal can therefore be achieved by attenuating a range of high-frequency components in the transform. In this case, a filter function, F(u, v), is selected that eliminates the high-frequency components of the transformed image, I(u, v). The ideal filter would simply cut off all frequencies above some threshold value, I0 (known as the cutoff frequency):

F(u, v) = 1 if |I(u, v)| ≤ I0
F(u, v) = 0 if |I(u, v)| > I0

The absolute value brackets refer to the fact that these are zero-phase-shift filters, because they do not change the phase of the transform. A graphical representation of an ideal low-pass filter is shown in Fig. 14. Just as an image can be blurred by attenuating high-frequency components using a low-pass filter, it can be sharpened by attenuating low-frequency components (Fig. 14). In analogy to the low-pass filter, an ideal high-pass filter has the following characteristics:

F(u, v) = 0 if |I(u, v)| ≤ I0
F(u, v) = 1 if |I(u, v)| > I0

Although useful, Fourier transforms can be computationally intensive and are still not routinely used in most microscopic applications of image processing. A mathematically related technique known as convolution, which utilizes digital masks to select particular features of an image, is the preferred method of microscopists, since many of these operations can be performed at faster rates and carry out the mathematical operation in coordinate space instead of frequency space. These operations are outlined in Section VI.B.
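An ideal cutoff filter is only a few lines with a modern FFT library. The sketch below (NumPy; the function name is ours, and the cutoff radius, expressed as a fraction of the Nyquist frequency, is an illustrative choice) implements the ideal low-pass filter; inverting the mask gives the corresponding high-pass filter:

    import numpy as np

    def ideal_lowpass(image, cutoff=0.1):
        """Attenuate all spatial frequencies above the cutoff (ideal filter).

        cutoff is the radius in frequency space, as a fraction of Nyquist.
        """
        img = np.asarray(image, dtype=float)
        spectrum = np.fft.fftshift(np.fft.fft2(img))   # zero frequency centered
        rows, cols = img.shape
        y = np.arange(rows) - rows // 2
        x = np.arange(cols) - cols // 2
        yy, xx = np.meshgrid(y, x, indexing="ij")
        radius = np.hypot(yy / (rows / 2), xx / (cols / 2))
        mask = radius <= cutoff        # 1 inside the cutoff, 0 outside
        filtered = spectrum * mask     # use (radius > cutoff) for high-pass
        return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))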
Fig. 14 Frequency-domain cutoff filters. The filter function in frequency space, F(u, v), is used to cut off all frequencies above or below some cutoff frequency, I0. (A) A high-pass filter attenuates all frequencies below I0, leading to a sharpening of the image. (B) A low-pass filter attenuates all frequencies above I0, which eliminates high-frequency noise but leads to smoothing or blurring of the image.

B. Convolution

The convolution of two functions, f(x) and g(x), is given mathematically by:

f(x) ∗ g(x) = ∫ f(α) g(x − α) dα

where α is a dummy variable of integration and the integral is taken from −∞ to +∞. It is easiest to visualize the mechanics of convolution graphically, as demonstrated in Fig. 15, which, for simplicity, shows the convolution of two square pulses. The convolution can be broken down into three simple steps:

1. Before carrying out the integration, reflect g(α) about the origin, yielding g(−α), and then displace it by some distance x to give g(x − α).
2. For all values of x, multiply f(α) by g(x − α). The product will be nonzero at all points where the functions overlap.
3. Integrating this product yields the convolution between f(x) and g(x).

Hence, the properties of the convolution are determined by the independent function f(x) and a function g(x) that selects for certain desired details in the function f(x). The selecting function g(x) is therefore analogous to the forward transformation kernel in frequency space, except that it selects for features in coordinate space instead of frequency space. This clearly makes the convolution an important image-processing technique for microscopists who are interested in feature extraction.
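The graphical construction in Fig. 15 can be verified numerically. A short sketch (NumPy; the pulse widths and sampling step are illustrative) convolves two square pulses and recovers the expected triangle:

    import numpy as np

    dx = 0.01
    x = np.arange(0.0, 1.0, dx)
    f = np.ones_like(x)        # square pulse of height 1 on [0, 1)
    g = 2.0 * np.ones_like(x)  # square pulse of height 2 on [0, 1)

    # Discrete approximation of the convolution integral (scaled by dx).
    conv = np.convolve(f, g) * dx

    print(conv.max())   # ~2.0, the peak of the triangle at x = 1
    # conv rises linearly from 0 to 2 over [0, 1] and falls back to 0 over [1, 2]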
Fig. 15 Graphical representation of one-dimensional convolution. (A) In this simple example, the function f(x) to be convolved is a square pulse of equal height and width. (B) The convolving function g(x) is a rectangular pulse which is twice as high as it is wide. The convolving function is reflected and then moved from −∞ to +∞. (C) In all regions where there is no overlap, the product of f(x) and g(x) is zero. However, g(x) overlaps f(x) by different amounts from x = 0 to x = 2, with maximum overlap occurring at x = 1. The operation therefore detects the trailing edge of f(x) at x = 0, and the convolution results in a triangle which increases in height from 0 to 2 for 0 ≤ x ≤ 1 and decreases in height from 2 to 0 for 1 ≤ x ≤ 2.

One simple application of convolution is the convolution of a function with an impulse function (commonly known as a delta function), δ(x − x0):

∫ f(x) δ(x − x0) dx = f(x0)

where the integral is taken from −∞ to +∞. For our purposes, δ(x − x0) is located at x = x0; the intensity of the impulse is determined by the value of f(x) at x = x0 and is zero everywhere else. In this example, we will let the kernel g(x) represent three impulse functions separated by a period, t:

g(x) = δ(x + t) + δ(x) + δ(x − t)

As shown in Fig. 16, the convolution of the square pulse f(x) with these three impulses results in a copying of f(x) at the impulse points.
Fig. 16 Using a convolution to copy an object. (A) The function f(x) is a rectangular pulse of amplitude A with its leading edge at x = 0. (B) The convolving function g(x) consists of three delta functions at x = −t, x = 0, and x = +t. (C) The convolution f(x) ∗ g(x) results in copies of the rectangular pulse at x = −t, x = 0, and x = +t.

As with Fourier transforms, the actual mechanics of convolution can rapidly become computationally intensive for a large number of points. Fortunately, many complex procedures can be adequately performed using a variety of digital masks, as illustrated in Section VI.C.

C. Digital Masks as Convolution Filters

For many purposes, the appropriate digital mask can be used to extract features from images. The convolution filter, acting as a selection function g(x), can be used to modify images in a particular fashion. Convolution filters reassign intensities by multiplying the gray value of each pixel in the image by the corresponding values in the digital mask and then summing all of the values; the resultant is assigned to the center pixel of the new image, and the operation is repeated for every pixel in the image (Fig. 17). Convolution filters can vary in size (i.e., 3 × 3, 5 × 5, 7 × 7, and so on) depending on the type of filter chosen and the relative weight that is required from values neighboring the center pixel.
Fig. 17 Performing convolutions using a digital mask. The convolution mask is applied to each pixel in the image. The value assigned to the central pixel results from multiplying each element in the mask by the gray value of the corresponding image pixel, summing the result, and assigning the value to the corresponding pixel in a new image buffer. The operation is repeated for every pixel, resulting in the processed image.

For different operations, a scalar multiplier and/or offset may be needed. For example, consider a simple 3 × 3 convolution filter, which has the form:

1/9  1/9  1/9
1/9  1/9  1/9
1/9  1/9  1/9
Applied to a pixel with an intensity of 128, surrounded by other intensity values as follows:

123   62   97
237  128    6
 19   23  124

the gray value in the processed image at that pixel would have a new value of 1/9 × (123 + 62 + 97 + 237 + 128 + 6 + 19 + 23 + 124) = 819/9 = 91. Note that this convolution filter is simply an averaging filter identical to the operation described in Section IV (in contrast, a median filter would have returned 97, the median of the nine values). A 5 × 5 averaging filter would simply be a mask which contains 1/25 in each pixel, whereas a 7 × 7 averaging filter would contain 1/49 in each pixel. Since the speed of processing decreases with the size of the digital mask, the most frequently used filters are 3 × 3 masks.

In practice, the values found in digital masks tend to be integer values, with a divisor that can vary depending on the desired operation. In addition, because many operations can lead to resultant values which are negative (since the values in the convolution filter can be negative), offset values are often used to prevent this from occurring. In the example of the averaging filter, the values in the kernel would be:

1  1  1
1  1  1
1  1  1

with a divisor value of 9 and an offset of zero. In general, for an 8-bit image, divisors and offsets are chosen so that all processed values following the convolution fall between 0 and 255.

Understanding the nature of convolution filters is absolutely necessary when using the microscope as a quantitative tool. User-defined convolution filters can be used to extract information specific to a particular application. When beginning to use these filters, it is important to have a set of standards to which the filters can be applied, in order to see whether the desired effect has been achieved. In general, the best test objects for convolution filters are simple geometric objects such as squares, grids, isosceles and equilateral triangles, circles, and so on. Many commercially available graphics packages provide such test objects in a variety of graphics formats. Examples of some widely used convolution masks are given in the following sections.
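A direct implementation of this kernel/divisor/offset scheme takes only a few lines. A sketch (NumPy; the function name is ours, edges are handled by replicating border pixels, and the mask is applied as described in the text, i.e., element-by-element without reflection, which is equivalent to true convolution for the symmetric masks used here):

    import numpy as np

    def convolve_mask(image, kernel, divisor=1, offset=0):
        """Apply an integer convolution mask with a divisor and offset.

        The result is clipped to the 8-bit range 0-255.
        """
        img = np.asarray(image, dtype=float)
        k = np.asarray(kernel, dtype=float)
        pad = k.shape[0] // 2
        padded = np.pad(img, pad, mode="edge")
        out = np.empty_like(img)
        rows, cols = img.shape
        for y in range(rows):
            for x in range(cols):
                window = padded[y:y + k.shape[0], x:x + k.shape[1]]
                out[y, x] = (window * k).sum() / divisor + offset
        return np.clip(out, 0, 255).astype(np.uint8)

    # The integer averaging filter of the text: all ones, divisor 9, offset 0.
    # averaged = convolve_mask(image, np.ones((3, 3)), divisor=9)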
1. Point Detection in a Uniform Field

Assume that an image consists of a series of grains on a constant background (e.g., a dark-field image of a cellular autoradiogram). The following 3 × 3 mask is designed to detect these points:

−1  −1  −1
−1  +8  −1
−1  −1  −1

When the mask encounters a uniform background, the gray value of the processed center pixel will be zero. If, on the other hand, a value above the constant background is encountered, its value will be amplified above that background and a high-contrast image will result.

2. Line Detection in a Uniform Field

Similar to the point mask in the previous example, a number of line masks can be used to detect sharp, orthogonal edges in an image. These line masks can be used alone or in tandem to detect horizontal, vertical, or diagonal edges in an image. Horizontal and vertical line masks are represented as:

−1  −1  −1        −1  +2  −1
+2  +2  +2        −1  +2  −1
−1  −1  −1        −1  +2  −1

whereas diagonal line masks are given as:

−1  −1  +2        +2  −1  −1
−1  +2  −1        −1  +2  −1
+2  −1  −1        −1  −1  +2
In any line mask, the direction of the positive values reflects the direction of the line detected. When choosing the type of line mask to be utilized, the user must know a priori the directions of the edges to be enhanced.

3. Edge Detection: Computing Gradients

Of course, lines and points are seldom encountered in nature, and another method for detecting edges is desirable. By far the most useful edge detection procedure is one that picks up any inflection point in intensity. This is best achieved by using gradient operators, which take the first derivative of light intensity in both the x- and y-directions. One type of gradient convolution filter that is often used is the Sobel filter. An example of a Sobel filter that detects horizontal edges is the Sobel North filter, expressed as the following 3 × 3 kernel:

+1  +2  +1
 0   0   0
−1  −2  −1

This filter is generally not used alone but together with the Sobel East filter, which detects vertical edges in an image. The 3 × 3 kernel for this filter is:

−1  0  +1
−2  0  +2
−1  0  +1

These two Sobel filters can be used to calculate both the angle of edges in an image and the relative steepness of intensity (i.e., the derivative of intensity with respect to position) in that image. The so-called Sobel Angle filter returns the arctangent of the ratio of the Sobel North-filtered pixel value to the Sobel East-filtered pixel value, while the Sobel Magnitude filter calculates a resultant value from the square root of the sum of the squares of the Sobel North and Sobel East values. In addition to Sobel filters, a number of different gradient filters can be used (specifically, Prewitt or Roberts gradient filters), depending on the specific application. Figure 18 shows the design and outlines the basic properties of these filters, and Fig. 19 shows the effects of these filters on a fluorescence micrograph.
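The two Sobel kernels combine into magnitude and angle images exactly as described above. A sketch (NumPy; the function name is ours, and the responses are kept as floats, without clipping, so that negative values survive the intermediate arithmetic):

    import numpy as np

    SOBEL_NORTH = np.array([[ 1,  2,  1],
                            [ 0,  0,  0],
                            [-1, -2, -1]], dtype=float)
    SOBEL_EAST = np.array([[-1, 0, 1],
                           [-2, 0, 2],
                           [-1, 0, 1]], dtype=float)

    def sobel_edges(image):
        """Return (magnitude, angle) images from the two Sobel responses."""
        img = np.asarray(image, dtype=float)
        padded = np.pad(img, 1, mode="edge")
        north = np.zeros_like(img)
        east = np.zeros_like(img)
        rows, cols = img.shape
        for y in range(rows):
            for x in range(cols):
                window = padded[y:y + 3, x:x + 3]
                north[y, x] = (window * SOBEL_NORTH).sum()
                east[y, x] = (window * SOBEL_EAST).sum()
        magnitude = np.hypot(north, east)   # Sobel Magnitude
        angle = np.arctan2(north, east)     # Sobel Angle (radians)
        return magnitude, angle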
Fig. 18 Different 3 × 3 gradient filters used in imaging. Shown are four different gradient operators and their common uses in microscopy and imaging:

Gradient (−1 +1 +1 / −1 −2 +1 / −1 +1 +1): detects the vertical edges of objects in an image.
Sobel North (+1 +2 +1 / 0 0 0 / −1 −2 −1) and Sobel East (−1 0 +1 / −2 0 +2 / −1 0 +1): North detects horizontal edges; East detects vertical edges. North and East are used to calculate Sobel Angle and Sobel Magnitude (see text). These filters should not be used independently; if horizontal or vertical detection is desired, use Prewitt.
Prewitt North (+1 +1 +1 / 0 0 0 / −1 −1 −1) and Prewitt East (−1 0 +1 / −1 0 +1 / −1 0 +1): North detects horizontal edges; East detects vertical edges.
Roberts Northeast (0 +1 / −1 0) and Roberts Northwest (+1 0 / 0 −1): Northeast detects diagonal edges from top-left to bottom-right; Northwest detects diagonal edges from top-right to bottom-left.

4. Laplacian Filters

Laplacian operators calculate the second derivative of intensity with respect to position and are useful for determining whether a pixel lies on the dark side or the light side of an edge. Specifically, the Laplace-4 convolution filter, given as:

 0  −1   0
−1  +4  −1
 0  −1   0

detects the light and dark sides of an edge in an image. Because of its sensitivity to noise, this convolution mask is seldom used by itself as an edge detector. In order to keep all values of the processed image positive and within 8 bits, a divisor of 8 and an offset value of 128 are often employed.
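With the convolve_mask sketch from Section VI.C, the Laplace-4 filter with its customary divisor and offset is a one-liner (kernel values from the text; the helper name is ours):

    import numpy as np

    LAPLACE_4 = np.array([[ 0, -1,  0],
                          [-1,  4, -1],
                          [ 0, -1,  0]])

    # Divisor 8 and offset 128 keep the signed response within 0-255:
    # flat regions map to 128, dark sides of edges fall below, light sides above.
    # edges = convolve_mask(image, LAPLACE_4, divisor=8, offset=128)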
Fig. 19 Different filters applied to a fluorescence image of a dividing mammalian cell: inverse contrast LUT, gradient filter, Laplacian filter, and Sobel filter.

The point detection filter shown earlier is also a kind of Laplace filter (known as the Laplace-8 filter). This filter uses a divisor value of 16 and an offset value of 128. Unlike the Laplace-4 filter, which only enhances edges, the Laplace-8 filter enhances edges and other features of the object.

VII. Conclusions

The judicious choice of image-processing routines can greatly enhance an image and can extract features in ways which are not otherwise possible. When applying digital manipulations to an image, it is imperative to understand the routines that are being employed and to make use of well-designed standards when testing them. With the advent of high-speed digital detectors and computers, near real-time processing involving moderately complicated routines is now possible.
References

Andrews, H. C., and Hunt, B. R. (1977). "Digital Image Restoration." Prentice-Hall, Englewood Cliffs, NJ.
Bates, R. H. T., and McDonnell, M. J. (1986). "Image Restoration and Construction." Oxford University Press, New York, NY.
Cardullo, R. A. (1999). Electronic and computer image enhancement in light microscopy. In "Encyclopedia of Life Sciences." John Wiley & Sons, Hoboken, NJ.
Castleman, K. R. (1979). "Digital Image Processing." Prentice-Hall, Englewood Cliffs, NJ.
Chellappa, R., and Sawchuck, A. A. (1985). "Digital Image Processing and Analysis." IEEE Press, New York, NY.
Erasmus, S. J. (1982). Reduction of noise in a TV rate electron microscope image by digital filtering. J. Microsc. 127, 29–37.
Gonzalez, R. C., and Wintz, P. (1987). "Digital Image Processing." Addison-Wesley, Reading, MA.
Green, W. B. (1989). "Digital Image Processing: A Systems Approach." Van Nostrand Reinhold, New York, NY.
Inoue, S. (1986). "Video Microscopy." Plenum, New York, NY.
Inoue, S., and Spring, K. R. (1997). "Video Microscopy," 2nd edn. Plenum, New York, NY.
Jahne, B. (1991). "Digital Image Processing." Springer-Verlag, New York, NY.
Pratt, W. K. (1978). "Digital Image Processing." Wiley, New York, NY.
Russ, J. C. (1990). "Computer-Assisted Microscopy: The Measurement and Analysis of Images." Plenum, New York, NY.
Russ, J. C. (1994). "The Image Processing Handbook." CRC Press, Ann Arbor, MI.
Shotton, D. (1993). "Electronic Light Microscopy: Techniques in Modern Biomedical Microscopy." Wiley-Liss, New York, NY.