2. visual image interpretation techniques have certain disadvantages: they may require extensive training and are labor intensive
here, the spectral characteristics are not fully
evaluated because of the limited ability of the eye to
discern tonal values and analyze the spectral changes
if the data are in digital form, the remote sensing data can be analysed using digital image processing techniques, and such a database can be used in a raster GIS
3. in applications where spectral patterns are more
informative, it is preferable to analyse digital data
rather than pictorial data
the use of computer-assisted analysis techniques
permits the spectral patterns in remote sensing
data to be more fully examined
it also permits the data analysis process to be
largely automated, providing cost advantages over
visual interpretation techniques
4. however, computers are also limited in their ability
to interpret spatial patterns, therefore visual and
numerical techniques complement each other
6. remote sensing images in their raw form, as received from remote sensors, may be distorted or contain deficiencies that diminish the accuracy of the information extracted and reduce the utility of the data
the correction of deficiencies and removal of flaws present in the data are termed pre-processing, which is usually required prior to image interpretation and analysis
7. the type of preprocessing required for an image depends on the quality of the image and the intended use of the data, and varies widely among sensors
1) Geometric correction
2) Radiometric correction
3) Atmospheric correction
8. geometric correction is the transformation of a remotely sensed image into a map with scale and projection properties; it removes geometric distortions caused by several factors
it georeferences the image to a particular projected map coordinate system
Type of error              Sources of error
Platform instability       altitude, attitude, scan skew, mirror-scan velocity
Scene effect               earth rotation, map projection
Sensor effect              mirror sweep
Scene and sensor effect    panorama, perspective
9. the pixel value in the corrected image must be
recalculated based on the pixel values surrounding
the transformed position in the original image
three methods: nearest neighbor, bilinear
interpolation and cubic convolution
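a minimal sketch of the nearest neighbor option is given below; inverse_transform is a hypothetical function that maps an output (corrected) pixel position back to a position in the original image:

```python
import numpy as np

# Minimal sketch of nearest-neighbor resampling during geometric correction.
# For each pixel of the corrected output grid, a (hypothetical) inverse
# transform gives the corresponding position in the original image, and the
# value of the closest original pixel is copied across.

def nearest_neighbour_resample(src, out_shape, inverse_transform):
    out = np.zeros(out_shape, dtype=src.dtype)
    for i in range(out_shape[0]):
        for j in range(out_shape[1]):
            r, c = inverse_transform(i, j)          # position in the source image
            r, c = int(round(r)), int(round(c))     # nearest original pixel
            if 0 <= r < src.shape[0] and 0 <= c < src.shape[1]:
                out[i, j] = src[r, c]
    return out
```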
11. environmental monitoring often requires the
comparison of images taken at different times or
geographical locations
the radiance measured by a remote sensor over a
particular feature is affected by changes in scene
illumination, atmospheric conditions, viewing
geometry, sensor response properties and other
factors
12. the data must be corrected so that they accurately represent the reflected or emitted radiation measured by the sensor
DN values have often been used directly to statistically classify cover types, identify terrain features, mosaic images or compute ratios
the results of such analyses are questionable because the digital numbers do not quantitatively represent any real physical quantity
13. it is therefore necessary to first convert the digital data (digital numbers) into physically meaningful values such as radiance and reflectance so that they can be used in further analysis
this is one of the most important radiometric data processing activities involved in many quantitative applications
14. B1. DN-to-radiance conversion
pixel values (DN values) in an image are usually a linear transformation of the physical quantity of spectral radiance measured by the sensor, scaled to fit the available bit range, e.g. 8-bit or 12-bit
15. radiance is a measure of the radiant energy given
out by an object and measured by a remote sensor
spectral radiance (L) is defined as the energy
within a wavelength band radiated by a unit area
per unit solid angle of measurement
16. radiance depends on the illumination (both its
intensity and direction), the orientation and position
of the target and the path of the light through the
atmosphere
DN-to-radiance conversion is useful
when comparing the actual radiance measured
by different sensors e.g. ETM+ versus OLI
when establishing quantitative relationships
between image data and ground measurements e.g.
water quality and plant biomass data
17. B2. Radiance-to-reflectance conversion
reflectance is the ratio of the amount of light leaving a target to the amount of light striking the target
it is a property of the material being observed
reflectance is defined by the following formula:
ρ = (π × L) / (E × sin α)
where,
L = spectral radiance measured by the sensor,
E = irradiance in mW/cm2 at the top of the atmosphere,
α = solar elevation angle, available in the header file of the CCT
18. as reflectance is often used in extracting
biophysical information, such as deriving vegetation
index values (e.g. NDVI), it is useful to convert the
spectral radiance measured by the sensor to the
apparent reflectance or TOA (Top of the
Atmosphere) planetary reflectance
TOA reflectance is the total spectral reflectance at
the sensor from both target and atmosphere
19. the main advantage of this conversion is to adjust
the images to a theoretically common illumination
condition, so there should be less variation between
scenes from different dates and from different
sensors
useful for image classification, color balancing,
and mosaicking
the process involves two steps:
a)convert the DN value to the TOA radiance based
on the sensor properties (i.e. gain/bias or
LMAX/LMIN)
b)convert the TOA radiance to TOA reflectance
based on Sun elevation and acquisition date
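a minimal sketch of these two steps for a Landsat-like sensor is given below; the gain/bias, ESUN, Earth-Sun distance and Sun elevation values shown are illustrative placeholders, and real values come from the scene metadata (MTL/header file):

```python
import numpy as np

# Step (a): linear DN -> at-sensor (TOA) spectral radiance.
def dn_to_radiance(dn, gain, bias):
    return gain * dn.astype(np.float64) + bias

# Step (b): TOA radiance -> TOA (apparent) reflectance using the solar
# irradiance ESUN, the Earth-Sun distance (in AU) and the Sun elevation angle.
def radiance_to_toa_reflectance(radiance, esun, sun_elev_deg, d_au=1.0):
    sun_zenith = np.deg2rad(90.0 - sun_elev_deg)
    return (np.pi * radiance * d_au ** 2) / (esun * np.cos(sun_zenith))

# Example with made-up coefficients for a single 8-bit band
dn = np.array([[0, 128, 255]], dtype=np.uint8)
L = dn_to_radiance(dn, gain=0.7, bias=-1.5)
rho = radiance_to_toa_reflectance(L, esun=1533.0, sun_elev_deg=45.0)
print(rho)
```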
20. apparent reflectance is a ratio and its native
output range is 0-1, but for display purposes the
ratio is multiplied by 255, so the output is stretched
from 0-1 to 0-255
21. B3. Cosmetic Operations
includes 2 topics:
1) the correction of digital images containing partially or entirely missing scan lines (line drop)
this is overcome by replacing the zero values with the mean of the pixel values of the previous and the following line
2) destriping of the imagery, which is needed because the irradiance (reflectance) recorded by different detectors for the same object may differ
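a minimal sketch of the line-drop correction described in 1) is shown below, assuming a single-band image held as a 2-D NumPy array; the zero-fraction test used to flag a dropped line is an illustrative choice:

```python
import numpy as np

# Minimal sketch of line-drop correction: a row that is (almost) entirely zero
# is replaced by the mean of the row above and the row below.
def fix_line_drops(band, zero_fraction=0.9):
    fixed = band.astype(np.float64).copy()
    for r in range(1, band.shape[0] - 1):
        if np.mean(band[r] == 0) >= zero_fraction:      # detect a dropped line
            fixed[r] = (fixed[r - 1] + fixed[r + 1]) / 2.0
    return fixed
```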
23. B4. Random Noise Removal
image noise is any unwanted disturbance in image
data that is due to limitations in the sensing and data
recording process
it is characterised by nonsystematic variations in gray levels from pixel to pixel, called bit errors
such noise is often referred to as being 'spiky' in character, and it causes images to have a 'salt and pepper' or snowy appearance
noise can be identified by comparing each pixel in
an image with its neighbors
24. if the difference between a given pixel value and its surrounding values exceeds an analyst-specified threshold, the pixel is assumed to contain noise
the noisy pixel value can then be replaced by the average of its neighboring values
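a minimal sketch of this threshold test is given below, using a 3x3 neighborhood mean; the threshold value is an arbitrary illustration:

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Minimal sketch of bit-error ("salt and pepper") removal: a pixel whose value
# differs from the mean of its 3x3 neighbors by more than a threshold is
# assumed to be noise and is replaced by that neighborhood mean.
def remove_spike_noise(band, threshold=40.0):
    band = band.astype(np.float64)
    local_mean = uniform_filter(band, size=3)
    noisy = np.abs(band - local_mean) > threshold
    cleaned = band.copy()
    cleaned[noisy] = local_mean[noisy]
    return cleaned
```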
25. B5. Sun elevation correction
images taken at different times of year or times of day are likely illuminated by the Sun at different angles
the solar elevation angle decreases from summer to winter for the same sensor and the same location
26. therefore, the Sun elevation correction is applied
it involves normalising the images acquired under different solar elevation angles to the zenith using the equation:
Lλ = L / sin α
where Lλ is the corrected radiance value, L is the measured radiance and α is the Sun elevation angle
27. the scattering effect increases the signal value (bias)
in the presence of haze, fog, or atmospheric scattering, there always exists some unwanted signal value called bias
if the data were free from atmospheric scattering, the best-fitting line should pass through the origin, which is usually not the case
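one common, simple way to remove this additive bias is dark-object subtraction, sketched below under the assumption that the darkest non-zero pixels in a band correspond to targets of near-zero reflectance (the percentile cut-off is illustrative):

```python
import numpy as np

# Minimal sketch of dark-object subtraction: the haze offset (bias) is
# estimated from the darkest non-zero pixels of a band and subtracted from
# every pixel, clipping negative results to zero.
def dark_object_subtraction(band, percentile=0.1):
    band = band.astype(np.float64)
    bias = np.percentile(band[band > 0], percentile)   # estimated scattering offset
    return np.clip(band - bias, 0, None)
```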
28. image registration is the exact pixel-to-pixel matching of two different images, or the matching of one image to a map
rectification is the process by which the geometry of an image area is made planimetric
it involves relating GCP pixel coordinates (row and column) with their map coordinate counterparts
each pixel is then referenced in degrees or meters in a standard map projection
29. whenever accurate area, direction and distance measurements are required, geometric rectification is required
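a minimal sketch of the GCP step is given below: a first-order (affine) polynomial relating image (row, col) coordinates to map (x, y) coordinates is fitted by least squares; the GCP values are made-up examples:

```python
import numpy as np

# Fit an affine (first-order polynomial) transform from ground control points.
def fit_affine(pixel_rc, map_xy):
    rows, cols = pixel_rc[:, 0], pixel_rc[:, 1]
    A = np.column_stack([np.ones_like(rows), cols, rows])   # [1, col, row]
    coef_x, *_ = np.linalg.lstsq(A, map_xy[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, map_xy[:, 1], rcond=None)
    return coef_x, coef_y

# Apply the fitted transform to any pixel position.
def pixel_to_map(row, col, coef_x, coef_y):
    return (coef_x[0] + coef_x[1] * col + coef_x[2] * row,
            coef_y[0] + coef_y[1] * col + coef_y[2] * row)

# Hypothetical GCPs: (row, col) in the image vs (easting, northing) on the map
gcp_pixels = np.array([[10, 15], [200, 30], [50, 400], [300, 350]], dtype=float)
gcp_map = np.array([[500300.0, 4210000.0], [500330.0, 4204300.0],
                    [511800.0, 4208800.0], [510200.0, 4201300.0]])
cx, cy = fit_affine(gcp_pixels, gcp_map)
print(pixel_to_map(120, 240, cx, cy))
```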
30. the major causes of low contrast in an image are: low sensitivity of the detectors, a weak signal from the objects present on the earth's surface, similar reflectance of different objects, and environmental conditions at the time of recording
the human eye is poor at discriminating the slight radiometric or spectral differences that may characterize such features
3. Image enhancement techniques
31. the main aim of digital enhancement is to amplify these slight differences for better clarity of the image scene for specific applications, i.e. it increases the separability (contrast) between the classes or features of interest
32. Examples of image enhancement operations:
- band combinations
- pan-sharpening
- contrast stretching
- spatial filtering
- ratioing
- PCA
broadly, the enhancement techniques are categorized as point operations and local operations
33. 3.1 Band combination
different false color composites may be suitable for identifying different types of objects:
a false color composite made of mid-infrared, near-infrared and green bands may help differentiate different forest types and discern differences in soil moisture content
34. 3.2 Pan-sharpening
increasing the spatial resolution of a multispectral
image with a higher-resolution panchromatic image
uses spatial information in the higher resolution
panchromatic band and spectral information in the
lower-resolution multispectral bands to produce a
high resolution multiband image
35. 3.3 Contrast stretching
enlarging the tonal distinction between different
features
improve the contrast in an image by expanding the
narrow range of brightness (DN) values in the image
over a wider range of values or over the entire
brightness range of the display medium (such as
computer screens)
36. the distribution of DN values in a histogram of remote
sensing imagery is often unimodal
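a minimal sketch of a linear contrast stretch is shown below; the percentile cut-offs that define the narrow input range are illustrative choices:

```python
import numpy as np

# Minimal sketch of a linear contrast stretch: the DN range between chosen
# lower/upper percentiles is expanded to fill the full 0-255 display range.
def linear_stretch(band, low_pct=2, high_pct=98):
    band = band.astype(np.float64)
    lo, hi = np.percentile(band, [low_pct, high_pct])
    stretched = (band - lo) / (hi - lo) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)
```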
38. 3.4 Spatial filtering
using spatial filters to detect, sharpen (enhance)
or smooth (suppress) specific features in an image
based on their spatial frequency
spatial frequency refers to the frequency of
change in DN values per unit distance along a
particular direction in the image
images with high spatial frequency have areas where changes in DN values occur within very short distances, while images in which changes occur over large distances have low spatial frequency
39. a scene with small details and sharp edges
contains more high spatial frequency information
than one composed of large coarse features
in a spatial filtering operation, each pixel value is replaced by a function of the DN values of the pixels in its neighborhood
it calculates a focal sum statistic for each pixel of the input image using a weighted kernel
40. a kernel is an array of coefficients or weights a few pixels in dimension (e.g. 3x3 or 5x5), usually used as a moving window
the spatial filtering operation moves pixel by pixel, multiplies the pixel values within the neighborhood by the corresponding coefficients or weights in the kernel, sums all the resulting products and replaces the central pixel value by that sum
the calculation is repeated until the entire image
is filtered and a new image is generated
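a minimal sketch of this kernel (convolution) operation is given below; the low pass (mean) and high pass kernels shown are typical textbook examples:

```python
import numpy as np
from scipy.ndimage import convolve

# Typical 3x3 kernels: a low pass (mean) kernel smooths the image, while a
# high pass kernel sharpens small detail.
low_pass = np.full((3, 3), 1.0 / 9.0)
high_pass = np.array([[-1.0, -1.0, -1.0],
                      [-1.0,  9.0, -1.0],
                      [-1.0, -1.0, -1.0]])

# Apply a kernel as a moving window: each output pixel is the weighted sum of
# its neighborhood, i.e. the focal sum described above.
def spatial_filter(band, kernel):
    return convolve(band.astype(np.float64), kernel, mode="nearest")
```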
42. low pass filters highlight low spatial frequency features, reduce the local variation and generally serve to smooth the image
43. high pass filters enhance high spatial frequency features, increase smaller detail and generally serve to sharpen the image
44. edge detection filters enhance and delineate
linear features such as roads, linear geological
structures and boundaries of area features
different from high pass filters, they preserve both
low- and high-frequency components of an image
45. image transformation is the derivation of a new image by combining two or more bands using arithmetic operations, mathematical statistics or Fourier transformations
the resulting image may well have properties that make it more suited to a particular purpose than the original
46. 4.1 Band ratioing
divide pixel values in one spectral band by the
corresponding values in another band on a pixel-by-
pixel basis
ratio images tend to carry the 'true' spectral
characteristics of features that are not affected by
variations in scene illumination conditions caused
by topographical slope, aspect, shadows or
seasonal changes
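a minimal sketch of pixel-by-pixel band ratioing is shown below; the bands are assumed to be co-registered 2-D arrays, and division by zero is guarded against:

```python
import numpy as np

# Minimal sketch of band ratioing: divide one band by another pixel by pixel,
# returning 0 where the denominator is zero.
def band_ratio(numerator, denominator):
    num = numerator.astype(np.float64)
    den = denominator.astype(np.float64)
    return np.divide(num, den, out=np.zeros_like(num), where=den != 0)
```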
47. Examples:
a ratio image of R/NIR would produce ratios much smaller than 1.0 for vegetation, and ratios around 1.0 for soil and water
a ratio of a mid-IR band (e.g., ETM+ Band 5) and a green band (e.g., ETM+ Band 2) may be used to assess moisture content
49. a grey scale image from band ratio 5/7 (Landsat ETM+) shows clay mineralization in white pixels; vegetation also appears white along the drainages
50. a ratio of the red band (e.g., ETM+ Band 3) and a mid-IR band (e.g., ETM+ Band 7) may reveal differences in water turbidity
a ratio of the red band and the blue band may help in the detection of ferric iron-rich rocks
in general, the weaker the correlation between the bands, the higher the information content of the ratio image
51. band ratio values generally vary considerably from one region to another or from one season to another, which makes comparisons across regions or over time rather difficult
therefore, more complex ratios have been developed in order to overcome these difficulties
Example: NDVI (Normalized Difference
Vegetation Index) is widely used for studying
vegetation dynamics, assessing biomass,
estimating crop yields, monitoring drought and
predicting hazardous fire zones
52. the NDVI is a bounded ratio that ranges between -1 and +1
clouds, water and snow have negative NDVI values since they are more reflective in the visible than in the near-IR wavelengths
soil and rock have broadly similar reflectance in the two bands, giving NDVI values close to 0
active vegetation has a positive NDVI, typically between 0.1 and 0.6, indicating increased photosynthetic activity and a greater density of the canopy
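a minimal sketch of the NDVI computation is shown below; the near-IR and red bands are assumed to be co-registered and already converted to reflectance:

```python
import numpy as np

# Minimal sketch of NDVI = (NIR - Red) / (NIR + Red); output is bounded to [-1, +1].
def ndvi(nir, red):
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    return np.divide(nir - red, denom, out=np.zeros_like(denom), where=denom != 0)
```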
54. 4.2 Principal component analysis
various wavelength bands in multispectral image data may appear similar and contain much the same information due to similarities in the spectral response of the observed features in those bands
such multispectral images are said to be highly correlated; inter-band correlation leads to redundancies in the data that can be reduced by Principal Components Analysis (PCA)
55. PCA identifies similarities and differences within a dataset and transforms a correlated dataset into a new dataset without correlations
thus, PCA helps in producing images that are more interpretable than the original ones and increases the computational efficiency of subsequent image analysis
56. the first principal component in the data (PC1)
represents the direction where there is the most
variance and where the data are most spread out
the second principal component in the data (PC2) is
the line perpendicular to PC1, and passing through
the mean of the data distribution
57. PC images with large percentages of the scene variance provide significant information about the observed features, while those with low variances largely represent noise and provide little useful information
the first few principal component images contain most of the variance in the original image data, meaning that they explain nearly all of the variance in the data; the remaining PC images can be ignored as they account for a very low percentage of it
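a minimal sketch of PCA applied to a multiband image is given below; it works on the band covariance matrix and orders the components by the proportion of variance they explain:

```python
import numpy as np

# Minimal sketch of PCA for a (rows, cols, bands) image: flatten to a
# (pixels x bands) matrix, decompose the band covariance matrix, and project
# the data onto the eigenvectors (principal components).
def principal_components(image):
    rows, cols, bands = image.shape
    X = image.reshape(-1, bands).astype(np.float64)
    X -= X.mean(axis=0)                               # centre each band
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(eigvals)[::-1]                 # PC1 = largest variance
    pcs = X @ eigvecs[:, order]
    explained = eigvals[order] / eigvals.sum()        # variance explained per PC
    return pcs.reshape(rows, cols, bands), explained
```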
59. image classification involves the automated extraction of different types of ground features from an image
it is the process of categorizing the pixels of an image, normally a multispectral or hyperspectral image, into different classes based on the spectral information represented by the DN values in one or more spectral bands
purpose: mapping land use and land cover (LULC), vegetation types, geologic terrains, mineral exploration
60. the result is a thematic map describing the spatial distribution of various land cover classes (such as water, vegetation and soil)
in an ideal world, all 'limestone' pixels, for example, would have exactly the same spectral signature, and we could then just say that any pixel in the image with that signature is limestone
we would do the same for soil, etc. and end up with a map of classes
63. A- Pixel based classification:
- proportion of the m classes within a pixel (e.g., 10% bare soil, 10% shrub, 80% forest)
64. object-oriented classification techniques allow the analyst to decompose the scene into many relatively homogeneous image objects (referred to as patches or segments)
the various statistical characteristics of these homogeneous image objects in the scene are then subjected to traditional statistical or fuzzy logic classification
usually used for the analysis of high-spatial-resolution imagery (e.g. 1 m IKONOS and 0.61 m QuickBird)
65. Methods of classification
in unsupervised classification, an algorithm automatically groups pixels with similar spectral characteristics (means, standard deviations, covariance matrices, correlation matrices, etc.) into unique clusters according to some statistically determined criteria
the analyst then re-labels and combines the spectral clusters into information classes
66. the analyst requests the computer to examine the digital image and extract a number of spectrally distinct clusters (e.g. Cluster 1 to Cluster 6)
67. the saved clusters are then used to classify the image: each unknown pixel in turn is assigned to one of the clusters in the output classified image
68. the analyst determines the ground cover for each of the clusters and labels them (e.g. water, conifer, hardwood) to produce the land cover map and its legend
69. Advantages
requires no prior knowledge of the region
human error is minimized
Disadvantages
classes do not necessarily match informational
categories of interest
limited control of classes and identities
distance measures are used to group or cluster brightness values together
the Euclidean distance between points in spectral space is a common way to calculate closeness
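a minimal sketch of such a clustering algorithm (k-means with Euclidean distance) is shown below; in practice an optimized library implementation would normally be used:

```python
import numpy as np

# Minimal sketch of unsupervised k-means clustering of a (rows, cols, bands)
# image in spectral space, using Euclidean distance to the cluster centres.
def kmeans_classify(image, n_clusters=6, n_iter=20, seed=0):
    rows, cols, bands = image.shape
    X = image.reshape(-1, bands).astype(np.float64)
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        # assign each pixel to the nearest cluster centre
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each centre as the mean of its member pixels
        for k in range(n_clusters):
            if np.any(labels == k):
                centres[k] = X[labels == k].mean(axis=0)
    return labels.reshape(rows, cols)
```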
70. supervised classification involves using pixels of known classes to identify pixels of unknown classes
it requires the analyst first to choose sample pixels in the image that are representative of specific classes, based on fieldwork, interpretation of aerial photographs and large-scale maps, personal knowledge, or a combination of these methods
these samples of known classes are called training sets (also known as input classes)
71. a class signature is then generated from the selected training sets; it describes the spectral characteristics of each class in all spectral bands
each pixel in the image is then evaluated against each class signature and assigned to the class it resembles most
a training set for a particular class is usually drawn from pixels in multiple areas determined by the analyst to represent that class, and should capture both the mean and the variability of the class as a whole
73. the better the training set represents the spectral
variation of the class, the more accurate the
classification results
for an n-band image, a training set for a given
class should contain at least 10*n pixels so that the
spectral response pattern of the class can be
reliably characterized
the number of training sets depends on the
nature of the classes and the complexity of the
study area
74. • training areas are digitized polygons of the selected pixels (e.g. a known conifer area, a known water area and a known deciduous area in the digital image)
• the computer then creates mean spectral signatures for each class (conifer, deciduous, water)
76. Advantages
analyst has control over the selected classes
tailored to the purpose
has specific classes of known identity
can detect serious errors in classification if
training areas are misclassified
Disadvantages
training data are usually tied to informational
categories and not spectral properties
training data selected may not be representative
selection of training data may be time consuming
may not be able to recognize special or unique
categories because they are not known or small
77. the parallelepiped classifier uses the range of DN values in each training set to define sub-spaces, often called decision regions, for each class
a decision region is usually bounded by the maximum and minimum DN values of each class in each band, calculated from the training set
if an unknown pixel lies in the decision region of a particular class, it will be assigned to that class (e.g. pixel p)
if it is placed outside all decision regions, it remains unknown
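a minimal sketch of this box (parallelepiped) decision rule is given below; training data are assumed to be supplied as a dictionary of (pixels x bands) arrays per class:

```python
import numpy as np

# Minimal sketch of a parallelepiped classifier: each class is bounded by the
# per-band min/max DN values of its training pixels; a pixel is assigned to
# the first class whose box contains it, and 0 means "unknown".
def parallelepiped_classify(image, training):
    rows, cols, bands = image.shape
    X = image.reshape(-1, bands).astype(np.float64)
    labels = np.zeros(len(X), dtype=np.int32)
    for class_id, samples in training.items():
        lo, hi = samples.min(axis=0), samples.max(axis=0)
        inside = np.all((X >= lo) & (X <= hi), axis=1)
        labels[(labels == 0) & inside] = class_id
    return labels.reshape(rows, cols)
```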
79. • each class type defines a spectral box
• note that some boxes overlap even though the classes are spatially separable
• this is due to band correlation in some classes
• it can be overcome by customising the boxes
80. the parallelepiped classifier is computationally efficient; however, some decision regions of different classes may overlap
in such cases, unknown pixels falling in the overlap regions are classified as 'not sure' or assigned to one of the overlapping classes
the method may not be very accurate, as the parallelepipeds are formed from DN ranges that may not be representative of a class
81. the minimum distance to means classifier assigns each pixel to the class whose mean is nearest in spectral space, for example (for two bands k and l):
Dist = sqrt((BV_ijk − μ_ck)² + (BV_ijl − μ_cl)²)
where BV_ijk and BV_ijl are the brightness values of pixel (i, j) in bands k and l, and μ_ck and μ_cl are the mean values of class c in those bands
• all pixels are classified to the nearest class unless a standard deviation or distance threshold is specified, in which case some pixels may be unclassified if they do not meet the selected criteria
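a minimal sketch of this minimum-distance rule is given below; class means are assumed to come from training statistics, and the optional threshold leaves distant pixels unclassified:

```python
import numpy as np

# Minimal sketch of the minimum-distance-to-means classifier: assign each
# pixel to the class with the closest mean vector (Euclidean distance),
# unless that distance exceeds an optional threshold (0 = unclassified).
def min_distance_classify(image, class_means, max_dist=None):
    rows, cols, bands = image.shape
    X = image.reshape(-1, bands).astype(np.float64)
    ids = np.asarray(list(class_means.keys()))
    means = np.asarray(list(class_means.values()), dtype=np.float64)
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    labels = ids[d.argmin(axis=1)]
    if max_dist is not None:
        labels = np.where(d.min(axis=1) <= max_dist, labels, 0)
    return labels.reshape(rows, cols)
```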
82. the maximum likelihood classifier applies a probability model to determine the decision regions
each pixel is evaluated and assigned to the class of which it has the highest probability of being a member
it assumes that the DN values of the training set for each class in each band are normally distributed; otherwise it cannot be applied
83. the probability of a pixel belonging to each of a predefined set of m classes is calculated based on a normal probability density function, and the pixel is then assigned to the class for which the probability is highest
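a minimal sketch of this rule using a multivariate normal density per class is given below; class statistics are estimated from the training pixels:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Minimal sketch of a Gaussian maximum likelihood classifier: fit a normal
# density (mean vector, covariance matrix) per class from its training pixels
# and assign every image pixel to the class with the highest density.
def max_likelihood_classify(image, training):
    rows, cols, bands = image.shape
    X = image.reshape(-1, bands).astype(np.float64)
    ids = list(training.keys())
    log_p = np.empty((len(X), len(ids)))
    for j, class_id in enumerate(ids):
        samples = training[class_id].astype(np.float64)
        dist = multivariate_normal(samples.mean(axis=0),
                                   np.cov(samples, rowvar=False),
                                   allow_singular=True)
        log_p[:, j] = dist.logpdf(X)
    return np.asarray(ids)[log_p.argmax(axis=1)].reshape(rows, cols)
```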
84. classified images require post-processing to evaluate classification accuracy and to generalize classes for export to image maps and vector GIS
post-classification processing can be used to:
- calculate class statistics and confusion matrices
- apply majority or minority analysis to classification images
86. accuracy assessment determines the quality of the information derived from remotely sensed data
it is carried out by selecting a sample of pixels from the thematic map (classified image) and checking their labels against classes determined from reference data (ground truth), desirably gathered during site visits
87. from these checks the percentage of pixels
from each class in the image labeled correctly
by the classifier can be estimated, along with
the proportions of pixels from each class
erroneously labeled into every other class
the results are then expressed in tabular
form, often referred to as a confusion or
error matrix
89. the matrix establishes the level of errors due to
omission (exclusion error), commission (inclusion
error), and can tabulate an overall total accuracy
the error matrix lists the number of pixels found
within a given class
the rows list the pixels classified by the image
software and the columns list the number of pixels
in the reference data (or reported from field data)
90. omission error relates to the probability of a reference pixel being accurately classified; it is a comparison against the reference data
commission error determines the probability that a classified pixel actually represents the class to which it has been assigned
the total accuracy is measured by calculating the proportion of correctly classified pixels relative to the total number of tested pixels (Total accuracy = total correct / total tested)
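a minimal sketch of building the error matrix and these accuracy measures is given below; rows hold the classified labels and columns the reference labels, matching the layout described above:

```python
import numpy as np

# Minimal sketch of an error (confusion) matrix and the derived accuracies:
# overall accuracy = total correct / total tested; producer's accuracy relates
# to omission error, user's accuracy to commission error.
def error_matrix(classified, reference, n_classes):
    m = np.zeros((n_classes, n_classes), dtype=np.int64)
    for c, r in zip(classified.ravel(), reference.ravel()):
        m[c, r] += 1                                  # row = classified, column = reference
    overall = np.trace(m) / m.sum()
    producers = np.diag(m) / m.sum(axis=0)            # 1 - omission error per class
    users = np.diag(m) / m.sum(axis=1)                # 1 - commission error per class
    return m, overall, producers, users
```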