International Journal of Image and Data Fusion
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/tidf20

Radar and optical remote sensing data evaluation and fusion; a case study for Washington, DC, USA
Terry Idol, Barry Haack & Ron Mahabir
Department of Geography and Geoinformation Science, George Mason University, Fairfax, VA, USA
Published online: 20 Mar 2015.

To cite this article: Terry Idol, Barry Haack & Ron Mahabir (2015): Radar and optical remote sensing data evaluation and fusion; a case study for Washington, DC, USA, International Journal of Image and Data Fusion
To link to this article: http://dx.doi.org/10.1080/19479832.2015.1017541
Radar and optical remote sensing data evaluation and fusion; a case
study for Washington, DC, USA
Terry Idol, Barry Haack and Ron Mahabir*
Department of Geography and Geoinformation Science, George Mason University, Fairfax, VA, USA
(Received 17 December 2014; accepted 3 February 2015)
The recent increase in the availability of spaceborne radar in different wavelengths
with multiple polarisations provides new opportunities for land surface analysis. This
research effort explored how different radar data, and derived texture values, indepen-
dently and in combination with optical imagery influence land cover/use classification
accuracies for a study site in Washington, DC, USA. Two spaceborne radar images,
Radarsat-2 C-band and Palsar L-band quad-polarised radar, were registered with Aster
optical data for this study. Traditional methods of classification were applied to various
components and combinations of this data set, and overall and class-specific thematic
accuracies obtained for comparison. The results for the two despeckled radar data sets
were quite different, with Radarsat-2 obtaining an overall accuracy of 59% and Palsar
77%, while that of the optical Aster was 90%. Combining the original radar and a
variance texture measure increased the accuracy of Radarsat-2 to 71% but that of
Palsar only to 78%. One of the sensor fusions of optical and radar obtained an accuracy
of 93%. For this location, radar by itself does not obtain classification accuracies as
high as optical data, but fusion with optical imagery provides better overall thematic
accuracy than the optical independently, and results in some useful improvements on a
class-by-class basis. For those regions with high cloud cover, quad polarisation radar
can independently provide viable results but it may be wavelength-dependent.
Keywords: Radarsat-2; Palsar; Aster; quad polarisation; sensor fusion; variance texture;
Washington, DC
1. Introduction
Reliable and up-to-date land cover/use information, covering both current conditions and changes over time, represents a fundamental resource for tracking human modification of the earth's terrestrial
surface. This information is required for making sound decisions on a broad range of
economic and environmental planning and resource management issues, including plan-
ning infrastructure, reducing pollution and expanding food production. Although conven-
tional ground survey methods for collecting land cover/use information still continue to be
used on a much smaller scale, they suffer from several limitations, such as being labour-
intensive and time-consuming with data collected relatively infrequently (Al-Tahir
et al. 2009). Compared to conventional methods, remote sensing offers several advantages
for collecting land cover/use data, including synoptic view, relatively low cost and the
ability to capture changes on the ground in shorter and in a more temporally consistent
manner (Haack et al. 2014). With increasing pressure placed on ecosystems due to
accelerated population growth, urbanisation, migration and economic growth, it is
*Corresponding author. Email: rmahabir@gmu.edu
International Journal of Image and Data Fusion, 2015
http://dx.doi.org/10.1080/19479832.2015.1017541
© 2015 Taylor & Francis
expected that remote sensing will continue to be relied on as a sustainable source of
information on land cover/use.
There has been a tremendous increase in the number of spaceborne remote sensors in recent years. These systems have provided data with a broad range of spatial,
spectral, temporal and radiometric resolutions. With these expansions in data types, there
has been a much wider set of applications for these data and improvements in derived land
cover/use maps and statistical information. For many years, this technology has been
based on optical, typically multispectral, systems such as Landsat and the French Satellite
Pour l'Observation de la Terre (SPOT). More recently, active microwave or radar has
become more available. These radar systems have some significant advantages over
optical systems in their ability to penetrate cloud cover and have night-time capabilities
(Al-Tahir et al. 2014). This provides the opportunity to collect data in areas, such as low-
latitude tropical regions and high latitudes, where it is difficult to obtain data via other
sensors (Henderson et al. 2002, Li et al. 2012). In the Amazon forest of South America,
for example, the likelihood of capturing optical image scenes with cloud cover rates of
30% or less can be as low as 0% per year (Asner 2001). Mahabir and Al-Tahir (2008) also
report similar issues for the Caribbean region using the island of Trinidad as a case study.
In such locations, radar imagery has tremendous potential for both updating and monitor-
ing land cover/use changes.
Spaceborne radar has recently improved greatly from the single wavelength and single
polarisation, in essence one band, which earlier systems provided. Those systems were
very limited in the amount of surface information that could be extracted (Töyrä
et al. 2001, Dell'Acqua et al. 2003). Newer systems, such as the Japanese Phased Array
type L-band Synthetic Aperture Radar (PALSAR), the Canadian RADARSAT-2 and the
European TerraSar-X and Sentinel sensors, collect information from multiple polarisa-
tions, allowing for much more complex processing and analysis and potentially more
useful spatial information (Sawaya et al. 2010, Sheoran and Haack 2013). In addition,
individual radar sensors may function in different microwave portions of the spectrum,
providing opportunities for comparison and integration.
Polarisation, the orientation of the beam relative to the earth’s surface either vertically
or horizontally, is important to remote sensing scientists as each type of polarisation
provides a different type of information. Polarisation can be altered for both the transmit-
ting and receiving aspects of the process, thus allowing four possible combinations of sent
and received signals; HH – horizontal sent/horizontal received, VV – vertical sent/vertical
received, HV – horizontal sent/vertical received and VH – vertical sent/horizontal
received (Campbell and Wynne 2012). With a quad polarisation sensor, all four combina-
tions are acquired.
One of the important derived values from radar is surface texture, the amount of
smoothness or roughness of a feature. For some features, texture by itself can be useful,
but often it is combined with the original radar data. There are many texture derivatives at
multiple window sizes that can be extracted from an image, potentially creating many
additional bands for analysis (Anderson 1998, Dekker 2003, Herold et al. 2003, 2004,
Lloyd et al. 2004, Amarsaikhan et al. 2007). These additional layers offer different sets of
information that can be used to improve discrimination between land cover/use features in
an image.
Various studies have compared different approaches for combining optical and radar
data for improving discrimination of the earth’s surface features. Pereira et al. (2013)
compared layer stacking and principal component fusion methods for separating different
agricultural land cover/use types in Brazil. Both Palsar and Landsat 5 TM data were used,
2 T. Idol et al.
with better discrimination resulting from layer stacking. Similarly, in Waske and Van Der
Linden (2008), support vector machine and random forest methods were tested with
overall classification accuracy results similar for both methods. Alparone et al. (2004)
developed an intensity-modulated approach, while Eshan (2011) compared Hue Intensity Saturation and Brovey transform methods, and Le Hégarat-Mascle et al. (1998) applied a Dempster-Shafer approach. These studies and others show improved classification results from the combined use of optical and radar data compared to the results of individual sensors.
Numerous other methods for multisensor fusion exist, with an excellent review of this
topic found in Luo et al. (2002), Pohl and Van Genderen (1998), Hall and Llinas (1997)
and Ehlers (1991). Many of the approaches applied to the fusion of optical and radar data examine single, relatively homogeneous areas, for example, forest or agricultural areas, rather than the separation of feature types across multiple land cover/use types. Furthermore, of those studies which have examined multiple land covers/
uses, few have used data gathered from multiple polarisations and from multiple wave-
lengths. Such studies are becoming increasingly important with increased availability of
these data types and with the expansion of human settlements.
The purpose of this research is to compare land cover/use classifications obtained
independently and in combinations of different radar wavelengths, polarisations and
derived texture measures. In addition, an optical image was included in this analysis.
This was an important component of this research since radar data, in comparison to
optical data, are usually captured within a much more limited set of bands (Shiraishi
et al. 2014). It is therefore expected that the combination of radar and radar-derived
texture measures and optical data will lead to improvements in the classification accuracy
of derived land cover/use. Furthermore, the Washington, DC, site used in this study
presents the opportunity to examine a complex landscape which continues to be influ-
enced by many cultures, both within the major city limits and the surrounding landscape.
In Section 2, a brief description of the study site and data used is given. Section 3 provides
the methodology for classifying and determining the accuracy of the results. Section 4
provides results for the various radar, radar-derived texture measures, optical data and
combinations of these for supporting land cover/use mapping, while Section 5 concludes
this paper.
2. Study site and data
The study area is Washington, DC, USA. Aster, Radarsat-2 and Palsar images were
acquired over the study area. The Aster image was collected on 11 March 2009, while
the Radarsat-2 and Palsar quad polarisation data were acquired on 17 July 2009 and
17 April 2007, respectively. These differences in acquisition dates do create some
concerns, but since the primary goal is relative comparison of different data combina-
tions, those concerns should be consistent for all classifications thus allowing valid
conclusions.
Radarsat-2 was launched in December 2007 and is the first commercial radar sensor to
acquire C-band quad polarisation imagery. Radarsat-2 offers a wide range of spatial
resolutions that vary based on different beam modes of operation (Canadian Space
Agency 2008). A fine-mode quad-polarisation image with a pixel resolution of 8 m was obtained for this study. The Palsar sensor was launched in January 2006. Palsar uses L-band radar with quad polarisation and is supported by the Japan Aerospace Exploration Agency (JAXA), a Japanese Government organisation. The spatial resolution of the Palsar data was 12.5 m
(JAXA 2006). Aster, an optical instrument on board the Terra satellite, was launched
in December 1999 as a joint venture between the United States and Japan. This sensor collects earth system data in 14 bands, but only the three with the finest spatial resolution of 15 m, the visible and near-infrared nadir bands (0.52–0.86 μm), were selected for this analysis.
The Washington, DC, study area and several surrounding suburban/urban areas (white
and pink tones) are shown in Figure 1. The imagery also includes a significant portion of
forest (green and dark grey tones). Forests are mainly located outside of city limits,
separating most suburban areas and with greater fragmentation of this land cover/use
type in closer proximity to suburban and urban areas. In addition, the Potomac River
(black tones) provides the opportunity to classify water bodies. The Potomac River has a length of approximately 644 km, with a deepest point of 107 feet (about 33 m). However, a navigable channel depth of about 24 feet (7 m) is maintained for most of the downstream portion of the Washington, DC, area (USGS 1988). The vast majority of high backscatter areas (white
tones) in the radar image were the urban features in and around Washington, DC. As
shown in Figure 1, many urban centres sit alongside the banks of the Potomac in
downtown Washington, DC. Suburban residential areas (pink tones) were present across
much of the scene and demonstrated a mix of high and low radar returns, as would be
expected from a complex landscape of buildings, lawns, trees, roads, etc.
This study site is useful for determining whether different combinations of original
radar and texture measures can provide good urban classifications.
Figure 1. Radarsat-2 composite (HH, VV, HV and VH) image over Washington (approximate size
27 × 31 km).
The land cover/use classification types examined for Washington, DC, consisted of
urban, forest, suburban and water, as described by Anderson et al. (1976). The classes used in this study were generalised and limited in number, but for a comparison of methods and data they were considered sufficient. At a later research stage, based upon results from this study, a more detailed set of classes may be considered.
3. Methodology
Collected images from the various Radarsat-2, Palsar and Aster sensors over the study area
were first reduced to the lowest common boundary between all three products. Images were
then registered to a common geographic coordinate system, Universal Transverse Mercator
Zone 18 N with an earth model of World Geodetic System 1984 and pixels resampled to
10 m using the nearest neighbour algorithm to support uniform analysis of the data. In
addition, the radiometric resolution of all data was consistently set at 8 bits.
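The registration, resampling and radiometric steps above can be sketched in Python. The paper does not name the software it used, so this is a minimal illustration with numpy; the function names (`resample_nearest`, `rescale_to_8bit`) are ours, not the authors', and real workflows would use a geospatial library for the map projection itself.

```python
import numpy as np

def resample_nearest(band, out_shape):
    """Nearest-neighbour resampling of a 2-D band onto a new pixel grid,
    as used here to bring all sensors to a common 10 m grid."""
    rows = (np.arange(out_shape[0]) * band.shape[0] / out_shape[0]).astype(int)
    cols = (np.arange(out_shape[1]) * band.shape[1] / out_shape[1]).astype(int)
    return band[np.ix_(rows, cols)]

def rescale_to_8bit(band):
    """Linearly stretch a band to 0-255, matching the study's consistent
    8-bit radiometric resolution."""
    lo, hi = band.min(), band.max()
    return ((band - lo) / max(hi - lo, 1e-9) * 255).astype(np.uint8)
```

Nearest-neighbour resampling is chosen in the paper precisely because it does not interpolate new pixel values, which matters for radar backscatter statistics.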
Training and truth areas of interest (AOI) polygons were then carefully selected to prevent, as far as possible, cross-contamination of class pixels. These polygons were determined from knowledge of the area, ground reconnaissance, visual analysis of the various remote sensing data and use of higher spatial resolution imagery, such as from Google Earth. Figure 2 shows samples of each land cover/use type collected
from the Aster imagery. These are represented at different scales to assist in visual
Figure 2. Optical scenes of Washington, DC, classes from Aster imagery (panels: Forested, Suburban, Urban, Water).
differentiation of each type. Training AOIs were used to calibrate, or train, the classification algorithm and were exclusive to that use, as were the truth AOIs to theirs. The training AOIs identified the spectral characteristics, or signatures, of each of the four classes. The truth AOIs, at different locations from the training AOIs, were used to determine the accuracy of the land cover/use classifications. A classification accuracy of 85% was taken to indicate good class and overall thematic accuracy, as recommended by Congalton and Green (1999) and Anderson et al. (1976). For both calibration and
validation, two to four AOIs were selected for each class with each AOI containing
about 1600 pixels on average. A maximum-likelihood (ML) decision rule was applied
to obtain the classifications. ML is a parametric classifier based on statistical theory. It
is one of the most widely used methods for land cover/use classification (Hansen
et al. 1996, Richards and Jia 2005), making this method an appropriate choice for use
in this research.
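The ML decision rule described above can be sketched as follows, assuming equal class priors and Gaussian class distributions estimated from training-AOI pixels. The paper's actual implementation is a standard remote sensing package's ML classifier, so this is an illustrative re-derivation and the function names are ours.

```python
import numpy as np

def train_signatures(samples):
    """Estimate per-class mean vectors and covariance matrices from
    training AOIs. samples: dict class_name -> (n_pixels, n_bands) array."""
    return {c: (x.mean(axis=0), np.cov(x, rowvar=False)) for c, x in samples.items()}

def ml_classify(pixels, sigs):
    """Maximum-likelihood rule: assign each pixel to the class with the
    largest Gaussian log-likelihood. pixels: (n_pixels, n_bands) array."""
    names = list(sigs)
    scores = []
    for c in names:
        mu, cov = sigs[c]
        inv, (_, logdet) = np.linalg.inv(cov), np.linalg.slogdet(cov)
        d = pixels - mu
        # log-likelihood up to a constant: -0.5 * (log|Sigma| + Mahalanobis^2)
        scores.append(-0.5 * (logdet + np.einsum('ij,jk,ik->i', d, inv, d)))
    return [names[i] for i in np.argmax(np.stack(scores), axis=0)]
```

Because the rule is parametric, it relies on the training signatures being reasonably Gaussian and well separated, which is why the signature statistics in Table 2 matter for anticipating classification performance.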
Images used throughout the classification process were derived by layer stacking individual bands to create a single multiband image. For example, for the Aster image, all three
visible and near infrared bands were layer stacked. A similar approach was used for
classifying radar (HH, VV, HV and VH bands) and radar-combined products. The next
section presents the results of the various classifications beginning with the independent
Aster and radar images and followed by the various value added, texture evaluations and
data combinations.
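Layer stacking as used throughout this study is simply the concatenation of co-registered bands along a third axis; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def layer_stack(*bands):
    """Stack co-registered 2-D bands into a single multiband image of
    shape (rows, cols, n_bands), e.g. Aster bands 1-3 or HH, VV, HV, VH."""
    if len({b.shape for b in bands}) != 1:
        raise ValueError("all bands must share the same pixel grid")
    return np.dstack(bands)
```

The same call covers every combination in this paper: the three Aster bands, the four radar polarisations, radar plus texture, or the full optical-radar fusion stack, since all inputs were first resampled to the common 10 m grid.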
4. Results
4.1. Aster classification
Table 1 contains the results for the Aster analysis. The optical data provide an initial
classification against which the radar and radar fusion results can be compared. The bottom row of each error matrix gives the producer's accuracy for each class, the right-hand column gives the user's accuracy for each class, and the single number at the bottom right of each matrix is the overall thematic accuracy. The optical land cover/use classification results are good for all classes, ranging
from 87% to 92% in producer’s accuracies and 84% to 100% in user’s accuracies. The
overall accuracy is a very good 90% for the three-band imagery and for a complex rural–
urban interface location. It is interesting that there is not more confusion between forest
and suburban or urban and suburban.
4.2. Radar analysis
Radar imagery often contains speckle, random pixels of high or low backscatter, which is, in essence, error arising from the sensor operation. There is considerable and not
Table 1. Error matrix for Aster, Washington, DC.

                 Water   Forest  Suburban  Urban   User's acc.
Water            4331       0       0        0     1.000
Forest              0    4511     363       18     0.922
Suburban            0     314    4030      402     0.849
Urban             638      68     193     4719     0.840
Producer's acc.  0.872   0.922   0.879    0.918    Overall: 0.898
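The producer's, user's and overall accuracies reported in these matrices follow directly from the error-matrix counts; a short check using the Table 1 values:

```python
import numpy as np

# Rows: classified class; columns: reference (truth) class, as in Table 1.
m = np.array([[4331,    0,    0,    0],
              [   0, 4511,  363,   18],
              [   0,  314, 4030,  402],
              [ 638,   68,  193, 4719]], dtype=float)

users = np.diag(m) / m.sum(axis=1)      # correct / total classified per row
producers = np.diag(m) / m.sum(axis=0)  # correct / total reference per column
overall = np.diag(m).sum() / m.sum()    # diagonal sum / grand total
```

Running this reproduces the tabulated values, e.g. an overall accuracy of 0.898 and a water producer's accuracy of 0.872.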
consistent literature on the need to remove, or at least reduce, the amount of speckle (Lu et al. 1996, Bouchemakh et al. 2008, Maghsoudi et al. 2012). Using the
radar data for Washington, an analysis was made between the spectral signatures and
thematic classifications of original radar and despeckled radar at both 3 × 3 and 5 × 5
windows using the Lee-Sigma algorithm. The larger window size despeckled data had
higher overall thematic accuracies as would be expected, particularly given the use of
polygons for accuracy assessment, because the despeckling is basically a smoothing
filter. However, the differences were relatively small. The Radarsat-2 original accuracy
was 57%, increasing to 59% with the 5 × 5 filter, and Palsar increased from 72% to
77% upon despeckling. Based on these results, the radar images in this study were despeckled for all subsequent classifications, while derived texture values were obtained from the original, undespeckled radar data.
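The Lee-Sigma despeckling applied here is typically a packaged implementation; the following is a simplified sigma filter in the same spirit (average only window neighbours within ±k·σ of the centre value, assuming multiplicative speckle with a fixed relative standard deviation), not the exact algorithm used in the study, and the parameter names are ours.

```python
import numpy as np

def sigma_filter(img, win=5, sigma_mult=0.2, k=2.0):
    """Simplified Lee-Sigma despeckle: replace each pixel with the mean of
    win x win neighbours whose values lie within +/- k*sigma of the centre,
    where sigma is approximated as sigma_mult times the centre value."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            c = padded[i + pad, j + pad]
            sel = w[np.abs(w - c) <= k * sigma_mult * c]
            out[i, j] = sel.mean() if sel.size else c
    return out
```

Because only similar-valued neighbours are averaged, the filter smooths speckle within homogeneous areas while preserving strong edges such as urban-water boundaries, consistent with the modest accuracy gains reported above.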
Table 2 contains the spectral signatures of two polarisations for the different land
cover/use classes for the despeckled Palsar image. Only the HH and HV bands are
shown in Table 2 as both HH and VV, and HV and VH results were very similar.
These signatures can provide information on how well the different classes are statistically separated, giving insight into how well classifications might perform. As would be expected, the larger window size has lower standard deviations, though they are only minimally reduced. The class mean digital number (DN) values, especially for the HH polarisation, are reasonably distinct given the standard deviations.
However, other than the low DN values for water, the HV classes overlap in spectral
space. The spectral signatures for the despeckled Radarsat-2 classes (not shown)
produced a pattern similar to the despeckled Palsar, with the notable exception that
the Palsar data had overall lower standard deviation values for each land cover/use
class.
Table 3 contains the error matrix of the classification for the 5 × 5 window despeckled
Radarsat-2 and Palsar images. The Palsar overall accuracy is much higher than the
Table 2. Spectral signatures of Washington, despeckled Palsar imagery.

                                  3 × 3 Window      5 × 5 Window
Land cover/use classes            HH      HV        HH      HV
Water      Mean (X̄)              18.47    3.36     18.47    3.35
           σ                       2.02    0.53      1.66    0.50
           Min. value             12       2        13       2
           Max. value             25       4        23       4
Forest     Mean (X̄)              25.93   14.22     25.91   14.22
           σ                       4.75    2.14      4.29    1.87
           Min. value             15       8        17       9
           Max. value             45      22        42      21
Suburban   Mean (X̄)              33.81   14.32     33.85   14.26
           σ                       6.89    3.14      6.18    2.55
           Min. value             16       7        17       8
           Max. value             59      37        57      28
Urban      Mean (X̄)              43.83   14.98     43.66   14.91
           σ                       9.94    3.20      8.76    2.75
           Min. value             22       7        25       7
           Max. value             96      28        86      25
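Per-class signature statistics like those tabulated in Table 2 can be computed from the pixels of each class's AOIs for a single polarisation band; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def class_signature(pixels):
    """Mean, sample standard deviation, min and max DN of one class's AOI
    pixels for one band - the statistics reported per polarisation in Table 2."""
    p = np.asarray(pixels, dtype=float)
    return {'mean': p.mean(), 'sigma': p.std(ddof=1),
            'min': int(p.min()), 'max': int(p.max())}
```

Comparing class means against their standard deviations, as the text does for the HH and HV polarisations, gives a quick indication of spectral overlap before any classification is run.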
Radarsat-2 image. This could be a function of wavelength, spatial resolution, date, the
properties of the despeckled images, as previously mentioned, or a combination of these
factors. Nonetheless, the differences are considerable. Both sensors, as would be expected,
easily delineate water but the misclassification of water with suburban, as indicated by the
producer’s accuracy for Radarsat-2, is surprising. The confusion between forest and
suburban in both data sets is expected, but the Palsar urban delineations were better
than anticipated.
4.3. Texture analysis
Traditional digital image classification methodologies are based only upon the use of the
spectral characteristics of the data, thus ignoring any spatial information in the collected
data (Maillard 2003). Some landscape features, such as residential or urban areas, are
more easily distinguished by their spatial characteristics than spectral (Solberg and
Anil 1997, Nyoungui et al. 2002). Ignoring the full complement of data collected, spectral
and spatial, creates challenges for the accurate classification of some land cover/use
classes. The spatial arrangement of an image, to some degree, can be extracted as textural
information from the pixels and is particularly useful for radar (Kurosu et al. 1999, Chen
et al. 2004, Champion et al. 2008, Cervone and Haack 2012). Radar texture was therefore
an important component of this study, with most measures used today based on the work
of Haralick et al. (1973).
Based upon prior research, the variance measure of texture was selected for this study
(Haack and Bechdol 2000). Variance texture measures were extracted for four different
window sizes for each band of the original, not despeckled, Radarsat-2 and Palsar data.
The window sizes were 5 × 5, 9 × 9, 13 × 13 and 17 × 17. The best window sizes are a
function of the spatial resolution of the sensor and the specific landscape characteristics
(Villiger 2008). Classifications were obtained for each texture window size, and their error matrices are contained in Tables 4 and 5. Equation (1) shows the method used for calculating the variance measure used in this study.
Table 3. Error matrices for Washington classification using despeckled 5 × 5 window.

Radarsat-2
                 Water   Forest  Suburban  Urban   User's acc.
Water            4101       1       2        3     0.999
Forest              0    2365    1661      777     0.492
Suburban          598    2145    2507     1785     0.356
Urban             270     382     416     2574     0.707
Producer's acc.  0.825   0.483   0.547    0.501    Overall: 0.590

Palsar
                 Water   Forest  Suburban  Urban   User's acc.
Water            4962       0       0        0     1.000
Forest              0    3605     931       77     0.781
Suburban            0    1228    2250      843     0.521
Urban               7      60    1405     4219     0.741
Producer's acc.  0.999   0.737   0.491    0.821    Overall: 0.768
\text{Variance} = \frac{\sum \left( X_{ij} - \bar{X} \right)^2}{n - 1}   (1)

where X_ij = DN value of pixel (i, j),
n = number of pixels in the window,
X̄ = mean of the moving window.
Table 4 shows that texture measures for the Radarsat-2 image produced much higher
accuracies than the original data (59%). However, in the case of the Palsar image, none of
the texture measures were able to generate a land cover/use classification accuracy that
was as high as the classification results for the original image (77%). The observed
differences in classification results were unexpectedly high for smaller window sizes.
These differences are likely a result of how the different radar bands interact with the
landscape features.
There is a pattern, with only one minor exception, in both the Radarsat-2 and Palsar
results. As the window size gets larger, the overall accuracy of the land cover/use
classification improves. In the Radarsat-2 texture measures, the overall accuracy improves
from 65% at a window size of 5 × 5 to 71% with a window of 13 × 13. There is a slight
decline at the largest window size of 17 × 17. For the Palsar image, the results are similar.
The overall accuracy for the texture measures increased from 64% with a window of 5 × 5
to 75% for a window of 17 × 17.
Table 4. Washington error matrices of Radarsat-2 variance texture.

5 × 5 Window
                 Water   Forest  Suburban  Urban   User's acc.
Water            4808       8       0       18     0.995
Forest             44    4003    2951      958     0.503
Suburban           97     715    1046     1268     0.335
Urban              20     167     589     2895     0.789
Producer's acc.  0.968   0.818   0.228    0.563    Overall: 0.651

9 × 9 Window
                 Water   Forest  Suburban  Urban   User's acc.
Water            4590       0       0        0     1.000
Forest              0    4038    2637      252     0.583
Suburban          379     682    1245     1063     0.370
Urban               0     173     704     3824     0.813
Producer's acc.  0.924   0.825   0.271    0.744    Overall: 0.699

13 × 13 Window
                 Water   Forest  Suburban  Urban   User's acc.
Water            4215       0       0        0     1.000
Forest              0    3882    2300       94     0.619
Suburban          754     876    1396      673     0.377
Urban               0     135     890     4372     0.810
Producer's acc.  0.848   0.793   0.304    0.851    Overall: 0.708

17 × 17 Window
                 Water   Forest  Suburban  Urban   User's acc.
Water            3397       0       0        0     1.000
Forest              0    3685    2112        4     0.635
Suburban         1572    1080    1478      428     0.324
Urban               0     128     996     4707     0.807
Producer's acc.  0.684   0.753   0.322    0.916    Overall: 0.677
The Radarsat-2 texture measure producer's accuracies show interesting fluctuations in the forest, suburban and urban classes when compared to the values obtained with
the despeckled 5 × 5 original image. This is understandable as texture is very
different from backscatter. The forest and urban producer’s accuracy increased but
that of the suburban decreased with texture. The user’s accuracy values were more
consistent.
There was an anomaly in one class’s accuracy in the Radarsat-2 texture results.
The water producer’s accuracy decreased significantly between the window size of
13 × 13 and that of a window size of 17 × 17, from 85% to 68%. The reason for the
decrease in water accuracy can be found by visually analysing the original Radarsat-2 imagery. Intense urban and suburban features that surround portions of the Potomac River have been 'ghosted', or reflected, onto the river. This effect is compounded by the larger texture window, which includes more land-based pixels and therefore produces a less distinctive water signature.
The overall classification result of the Palsar despeckled 5 × 5 image was 77%. The
classification result from the texture measure generated from the Palsar original image
with a window size of 17 × 17 is 75%, a decrease of 2%. The classification performed
with the despeckled 5 × 5 image does slightly worse in the producer’s accuracy for the
water, suburban and urban classes. Conversely, the texture measure classification does
slightly better in the producer’s accuracy for the forest class. These differences, however,
are minimal.
Table 5. Washington error matrices of Palsar variance texture.

5 × 5 Window
                 Water   Forest  Suburban  Urban   User's acc.
Water            4760     108      41       27     0.964
Forest            175    3948    2392     1038     0.523
Suburban            7     613    1142     1427     0.358
Urban              27     224    1011     2647     0.677
Producer's acc.  0.958   0.807   0.249    0.515    Overall: 0.638

9 × 9 Window
                 Water   Forest  Suburban  Urban   User's acc.
Water            4784       2       0        0     1.000
Forest            168    3917    1978      480     0.599
Suburban            0     801    1490     1362     0.408
Urban              17     173    1118     3297     0.716
Producer's acc.  0.963   0.801   0.325    0.642    Overall: 0.689

13 × 13 Window
                 Water   Forest  Suburban  Urban   User's acc.
Water            4830       0       0        0     1.000
Forest            139    3800    1664      228     0.652
Suburban            0     929    1624     1283     0.423
Urban               0     164    1298     3628     0.713
Producer's acc.  0.972   0.777   0.354    0.706    Overall: 0.709

17 × 17 Window
                 Water   Forest  Suburban  Urban   User's acc.
Water            4884       0       0        0     1.000
Forest             85    3711    1489      109     0.688
Suburban            0     936    2159     1016     0.525
Urban               0     246     938     4014     0.772
Producer's acc.  0.983   0.758   0.471    0.781    Overall: 0.754
4.4. Combining despeckled radar with texture
The 5 × 5 despeckled original radar images were integrated with the best of the texture
measures for each sensor and then classified, that is, the 13 × 13 and 17 × 17 variance
measures for the despeckled Radarsat-2 and Palsar data, respectively. Table 6 contains the
results of these combinations for both radar sensors.
The Radarsat-2 combination provided an overall accuracy of 71%, an increase of 12% over the 59% obtained from the despeckled radar image alone. The overall accuracy of the Palsar original despeckled image combined with its best texture measure improved only slightly compared with the classification of the Palsar despeckled image alone, from 77% to 78%.
The producer’s accuracy of the water class was high, 84% and 100%, in the two radar
wavelengths. By adding texture measures to the original imagery, the forest and urban
classes were able to perform much better in both the producer’s and user’s accuracies,
when compared to the original Radarsat-2 imagery. The texture measures in these classes,
when combined with the original image, greatly enhance the classification results. The
results in the Palsar image were lower, as the overall classification increased only by 1%.
The producer’s accuracy in the urban class did increase by 7%, but that of the forest class
actually decreased by a nominal 1%. Palsar continues to provide better results than
Radarsat-2.
4.5. Combining multiple wavelength radar images
The recent increase in types of spaceborne radar allowed this analysis to include classify-
ing radar images from two different portions of the electromagnetic spectrum. The Palsar
sensor collects data in the L-band, while the Radarsat-2 collects data in the C-band. Both
of the images used in this analysis were despeckled with a 5 × 5 window.
The combined Washington Palsar and Radarsat-2 images showed a slight increase in overall accuracy to 78% (Table 7), compared to the 77% achieved when classifying the despeckled Palsar image alone. However, there are some interesting class differences, with less range in the producer’s and user’s accuracies. For the combined radar, the producer’s accuracies varied from 71% to 94%, while for the original Palsar the range was from 49% to 100%. The suburban class, for example, increased from 49% with the original Palsar to 71% in the fused radar. These reduced class-by-class variations support the integration of multi-wavelength radar for land cover/use mapping.

Table 6. Error matrices of Washington original despeckled imagery combined with the best derived texture measure.

Radarsat-2 Original and Texture 13 × 13
              Water   Forest  Suburban   Urban   User's
Water          4177        0         0       0    1.000
Forest            3     3535      1859      52    0.649
Suburban        787     1203      1867     718    0.408
Urban             2      155       860    4369    0.811
Producer's    0.841    0.722     0.407   0.850    0.712 (overall)

Palsar Original and Texture 17 × 17
              Water   Forest  Suburban   Urban   User's
Water          4959        0         0       0    1.000
Forest            0     3551      1048       4    0.771
Suburban          0     1226      2209     539    0.556
Urban            10      116      1329    4596    0.760
Producer's    0.998    0.726     0.482   0.894    0.782 (overall)

International Journal of Image and Data Fusion 11
Downloaded by [George Mason University] at 11:57 23 March 2015
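The accuracy figures reported for these error matrices follow directly from the matrix counts: user's accuracy divides each diagonal entry by its row total, producer's accuracy divides it by its column total, and overall accuracy is the diagonal sum over the grand total. A sketch using the Radarsat-2 matrix from Table 6:

```python
import numpy as np

# Radarsat-2 error matrix from Table 6.
# Rows: classified as Water, Forest, Suburban, Urban; columns: reference classes.
cm = np.array([[4177,    0,    0,    0],
               [   3, 3535, 1859,   52],
               [ 787, 1203, 1867,  718],
               [   2,  155,  860, 4369]])

users = cm.diagonal() / cm.sum(axis=1)      # user's accuracy: diagonal / row total
producers = cm.diagonal() / cm.sum(axis=0)  # producer's accuracy: diagonal / column total
overall = cm.diagonal().sum() / cm.sum()    # overall accuracy

# Reproduces the published figures: user's 1.000/0.649/0.408/0.811,
# producer's 0.841/0.722/0.407/0.850, overall 0.712.
```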
4.6. Combining optical and radar images
The data acquired for this study provide an opportunity to integrate the radar and texture measures with the Aster multispectral image. The fusion of different radar wavelengths with optical imagery for land cover/use classification is a relatively new area of research (Santos and Messina 2008, Amarsaikhan et al. 2012). This may be due in part to radar data being much less accessible to scientists and researchers than optical data.
The overall accuracy of the land cover/use classification from the Washington Aster image alone was 90%, whereas the best classification accuracy for a radar data set, achieved with the Palsar, was 77%. The addition of the Palsar imagery to the Aster increased the overall accuracy to 93% (Table 8). The Radarsat-2 texture measure with a window size of 13 × 13 produced the best overall accuracy among the Washington texture measures (71%). This layer was then combined with the Aster data, which yielded an overall accuracy of 90%.

Table 7. Washington multi-wavelength error matrices combining Radarsat-2 and Palsar.

              Water   Forest  Suburban   Urban   User's
Water          4650        0         0       0    1.000
Forest            0     3269       699      75    0.809
Suburban          0     1596      3252    1007    0.555
Urban           319       28       635    4057    0.805
Producer's    0.936    0.668     0.709   0.789    0.777 (overall)

Table 8. Error matrices of Washington multispectral optical, radar and texture combinations.

Aster and Palsar
              Water   Forest  Suburban   Urban   User's
Water          4913        0         0       0    1.000
Forest            0     4442       184       4    0.959
Suburban          0      432      4142     427    0.828
Urban            56       19       260    4708    0.934
Producer's    0.989    0.908     0.903   0.916    0.929 (overall)

Aster and Radarsat-2 Texture 13 × 13
              Water   Forest  Suburban   Urban   User's
Water          4805        0         0       0    1.000
Forest            0     3887       209       0    0.949
Suburban         16      931      4162     465    0.747
Urban           148       75       215    4674    0.914
Producer's    0.967    0.794     0.908   0.910    0.895 (overall)

Aster and Palsar and Radarsat-2 Texture 17 × 17
              Water   Forest  Suburban   Urban   User's
Water          4966        0         0       0    1.000
Forest            0     3793       155       0    0.961
Suburban          0     1034      3912     359    0.737
Urban             3       66       519    4780    0.890
Producer's    0.999    0.775     0.853   0.930    0.891 (overall)
Finally, the best texture measure, again the Radarsat-2 with a window size of 13 × 13, and the best of the original radar images, the Palsar, were layer stacked with the Aster data. These combined data layers were analysed, and the resulting land cover/use classification produced an overall accuracy of 89%.
For the Washington, DC, location and its land cover/use classes, the combination of the radar or derived radar texture measures did not improve overall accuracies much over the original Aster. Given that the Aster independently achieved a classification accuracy of 90%, there was little opportunity for improvement. There are, however, some specific class improvements with the sensor fusion, such as the producer’s accuracy for water increasing from 87% for Aster alone to 100% with sensor integration.
5. Summary
Land cover/use information represents an important resource for tracking humans’ impact on the earth’s surface. Without adequate land cover/use information, decision-makers often fail to make reliable decisions concerning the sustainable planning and management of land resources. This in turn can have debilitating effects, in both the medium and long term, on countries’ self-sustainability.
The most common method of collecting land cover/use data is the use of optical sensors on board aerial and spaceborne platforms. These methods, although largely successful, continue to be impacted by cloud cover, especially in tropical and high-latitude locations, presenting a challenge for continuous observation and monitoring of land resources. Radar, still a relatively new area of research for land cover/use mapping (Hoekman et al. 2010), has the potential to overcome these challenges. The electromagnetic waves of radar are largely unaffected by atmospheric interference and provide all-weather land observation data. As these data become increasingly available, it is expected that there will be an increased need for studies examining the suitability of radar, both as a surrogate for and as a complement to optical data, for land cover/use mapping in different parts of the world.
In this study, the potential of using radar to support land cover/use mapping was examined. Of the two radar sensors evaluated, the original Palsar data produced much better classification results than Radarsat-2. Texture, a tool used widely in land cover research, was also evaluated. Results showed that derived radar texture values were variable in their ability to improve classifications. The Radarsat-2 texture measures resulted in classifications 12% better than the despeckled original image. Further analysis showed that, overall, the Palsar L-band did not perform as well as the Radarsat-2 C-band when generating classifications using a texture measure. These results are consistent with the findings of Li et al. (2012), who compared L-band and C-band radar over Brazil, a humid tropical area, in contrast to the Washington, DC, location examined in this research. Also interesting, and consistent with the literature, the best classification accuracy improvements were seen in the urban class using Radarsat-2 imagery texture. Urban areas, which are notoriously difficult to map because of their complex mix of human-transformed surfaces, can therefore benefit from the use of radar to support land cover/use mapping of this class. This is especially important given the accelerated growth of many such areas over the last 50 years.
The combination of radar and radar-derived texture measures was also explored. The classification results of the combined original radar and texture images showed varied increases compared to the overall accuracy of the despeckled-only radar image classifications. There was virtually no improvement for Palsar, but a 7% increase over the best Radarsat-2 texture measure when the original image was combined with that measure. Combining the radar images from two different portions of the electromagnetic spectrum resulted in little improvement over the use of the independent Palsar. However, the initial Palsar results were quite good.
Finally, this study in part reinforced the value of optical imagery. The results for the classifications using the independent Aster imagery were excellent. Even when optical imagery is available, radar imagery can help improve the classification results: when the radar imagery was added to the Aster optical image, the overall accuracy improved, though marginally, from 90% to 93%. However, the producer’s accuracy for water was higher than with the optical alone, and that for urban was equal to it. For the Washington, DC, data, independent radar land cover/use classification accuracies do not compete with those of optical imagery. However, the radar overall accuracy of 78% would be very useful in those regions of the world where cloud cover or other factors limit the availability of optical acquisitions.
Several limitations were also identified during the course of this research, which suggest improvements for future work. First, only a few generic land cover/use classes were examined. Although results were generally good for the combination of optical and radar data, both overall and for individual classes, the classes selected may not be appropriate for other study areas, which may define these classes differently. These classes may also be too coarse for understanding the drivers of land cover/use change taking place on the ground. Investigation of more detailed classes is therefore needed, which may lead to results different from those obtained in this study. Second, only one classification method was investigated, the ML decision rule. Other methods of classification, such as support vector machines and random forests, should also be investigated and their results compared to determine the most suitable method. Third, this study utilised only one measure of texture. Additional measures should be examined and compared to produce more conclusive results as to the most suitable texture measure. Finally, several studies have already investigated the use of multidate radar as a possible source for improving classification results (Le Hegarat-Mascle et al. 2000, Shao et al. 2001, Chust et al. 2004). Further examination of these types of data, both as a single data source and as a complement to optical data, is also warranted.
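The ML decision rule referred to above can be sketched as a Gaussian maximum likelihood classifier: fit a mean vector and covariance matrix to each class's training pixels, then assign each pixel to the class with the highest likelihood. The class names and synthetic training data below are illustrative only, not the study's training samples.

```python
import numpy as np

def fit_gaussian(samples):
    """Fit a multivariate Gaussian (mean, covariance) to training pixels."""
    return samples.mean(axis=0), np.cov(samples, rowvar=False)

def log_likelihood(x, mean, cov):
    """Gaussian log-likelihood up to a constant; enough to compare classes."""
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + d @ np.linalg.inv(cov) @ d)

rng = np.random.default_rng(0)
train = {                                    # illustrative two-band training pixels
    "water": rng.normal(10, 1, (50, 2)),
    "forest": rng.normal(30, 2, (50, 2)),
}
models = {name: fit_gaussian(s) for name, s in train.items()}

pixel = np.array([29.0, 31.0])               # an unlabelled two-band pixel
label = max(models, key=lambda n: log_likelihood(pixel, *models[n]))
```

Here the pixel falls near the forest cluster, so the rule assigns it the "forest" label; in practice the same comparison runs over every pixel of the stacked image.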
Acknowledgements
The authors would like to thank the following organisations for providing the imagery used in this
research. Radarsat-2 images were provided by the Canadian Space Agency under project 3126 of the
Science and Operational Application Research for RADARSAT-2 programme. The Alaska Satellite Facility, under sponsorship from NASA, provided the PALSAR imagery. Finally, the NASA
Land Processes Distributed Active Archive Center at the USGS/Earth Resources Observation and
Science (EROS) Center provided the ASTER imagery.
Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
Additional support was provided through grants received by the Department of Geography and
Geoinformation Science at George Mason University.
References
Alparone, L., et al., 2004. Landsat ETM+ and SAR image fusion based on generalized intensity
modulation. IEEE Transactions on Geoscience and Remote Sensing, 42 (12), 2832–2839.
doi:10.1109/TGRS.2004.838344
Al-Tahir, R., Richardson, T., and Mahabir, R., 2009. Advancing the use of earth observation systems
for the assessment of sustainable development. Association of Professional Engineers of
Trinidad Tobago, 38, 6–15.
Al-Tahir, R., Saeed, I., and Mahabir, R., 2014. Application of remote sensing and GIS technologies
in flood risk management. In: D.D. Chadee, J.M. Sutherland, and J.B. Agard, eds. Flooding and
climate change: sectorial impacts and adaptation strategies for the Caribbean region.
Hauppauge, NY: Nova, 137–150.
Amarsaikhan, D., et al., 2007. The integrated use of optical and InSAR data for urban land-cover
mapping. International Journal of Remote Sensing, 28 (6), 1161–1171. doi:10.1080/
01431160600784267
Amarsaikhan, D., et al., 2012. Comparison of multisource image fusion methods and land cover
classification. International Journal of Remote Sensing, 33 (8), 2532–2550. doi:10.1080/
01431161.2011.616552
Anderson, C., 1998. Texture measures in SIR-C images. In: IEEE International Geoscience and Remote Sensing Symposium Proceedings (IGARSS ’98), 3, 1717–1719. doi:10.1109/IGARSS.1998.692452
Anderson, J.R., et al., 1976. A land use and land cover classification system for use with remote
sensor data. US Geological Survey Professional Paper, No. 964. Washington, DC, p. 28.
Asner, G., 2001. Cloud cover in Landsat observations of the Brazilian Amazon. International
Journal of Remote Sensing, 22 (18), 3855–3862. doi:10.1080/01431160010006926
Bouchemakh, L., et al., 2008. A comparative study of speckle filtering in polarimetric RADAR
SAR images. In: 3rd international conference on information and communication technologies:
from theory to applications, ICTTA 2008, 1–6. doi:10.1109/ICTTA.2008.4530040
Campbell, J. and Wynne, R., 2012. Introduction to remote sensing. 5th ed. New York, NY: Guilford
Press, 626.
Canadian Space Agency, 2008. Radarsat – 1 [online]. Available from: http://www.space.gc.ca/asc/
eng/satellites/radarsat1/default.asp [Accessed 2008].
Cervone, G. and Haack, B., 2012. Supervised machine learning of fused RADAR and optical data for land
cover classification. Journal of Applied Remote Sensing, 6 (1), 063597. doi:10.1117/1.JRS.6.063597
Champion, I., et al., 2008. RADAR image texture as a function of forest stand age. International
Journal of Remote Sensing, 29 (6), 1795–1800. doi:10.1080/01431160701730128
Chen, D., Stow, D., and Gong, P., 2004. Examining the effect of spatial resolution and texture
window size on classification accuracy: an urban environment case. International Journal of
Remote Sensing, 25 (11), 2177–2192. doi:10.1080/01431160310001618464
Chust, G., Ducrot, D., and Pretus, J.L., 2004. Land cover discrimination potential of radar multi-
temporal series and optical multispectral images in a Mediterranean cultural landscape.
International Journal of Remote Sensing, 25 (17), 3513–3528. doi:10.1080/
0143116032000160480
Congalton, R. and Green, K., 1999. Assessing the accuracy of remotely sensed data: principles and
practices. 1st ed. Boca Raton, FL: CRC Press, 137.
Dekker, R.J., 2003. Texture analysis and classification of ERS SAR images for map updating of
urban areas in The Netherlands. IEEE Transactions on Geoscience and Remote Sensing, 41 (9),
1950–1958. doi:10.1109/TGRS.2003.814628
Dell’Acqua, F., Gamba, P., and Lisini, G., 2003. Improvements to urban area characterization using
multitemporal and multiangle SAR images. IEEE Transactions on Geoscience and Remote
Sensing, 41 (9), 1996–2004. doi:10.1109/TGRS.2003.814631
Ehlers, M., 1991. Multisensor image fusion techniques in remote sensing. ISPRS Journal of
Photogrammetry and Remote Sensing, 46 (1), 19–30. doi:10.1016/0924-2716(91)90003-E
Eshan, G., 2011. Use of radar remote sensing for land use dynamic monitoring in South West Coast
of Caspian sea. International Journal of Geomatics and Geosciences, 2 (2), 472–480.
Haack, B. and Bechdol, M., 2000. Integrating multisensor data and RADAR texture measures for
land cover mapping. Computers & Geosciences, 26 (4), 411–421. doi:10.1016/S0098-3004(99)
00121-1
Haack, B., Mahabir, R., and Kerkering, J., 2014. Remote sensing-derived national land cover land
use maps: a comparison for Malawi. Geocarto International, 1–23. doi:10.1080/
10106049.2014.952355
Hall, D.L. and Llinas, J., 1997. An introduction to multisensor data fusion. Proceedings of the IEEE,
85 (1), 6–23. doi:10.1109/5.554205
Hansen, M., Dubayah, R., and DeFries, R., 1996. Classification trees: an alternative to traditional
land cover classifiers. International Journal of Remote Sensing, 17 (5), 1075–1081.
doi:10.1080/01431169608949069
Haralick, R., Shanmugam, K., and Dinstein, I., 1973. Textural features for image classification.
IEEE Transactions on Systems, Man, and Cybernetics, SMC-3 (6), 610–621. doi:10.1109/
TSMC.1973.4309314
Henderson, F., et al., 2002. Evaluation of SAR-optical imagery synthesis techniques in a complex
coastal ecosystem. Photogrammetric Engineering and Remote Sensing, 68 (8), 839–846.
Herold, M., Liu, X., and Clarke, K., 2003. Spatial metrics and image texture for mapping urban land
use. Photogrammetric Engineering and Remote Sensing, 69 (9), 991–1001. doi:10.14358/
PERS.69.9.991
Herold, N., Haack, B., and Solomon, E., 2004. An evaluation of RADAR texture for land use/cover
extraction in varied landscapes. International Journal of Applied Earth Observation and
Geoinformation, 5 (2), 113–128. doi:10.1016/j.jag.2004.01.005
Hoekman, D., Vissers, M., and Wielaard, N., 2010. PALSAR wide-area mapping of Borneo:
methodology and map validation. IEEE Journal of Selected Topics in Applied Earth
Observations and Remote Sensing, 3 (4), 605–617. doi:10.1109/JSTARS.2010.2070059
JAXA, 2006. Image data acquired by the Palsar onboard the “Daichi”. Japanese Aerospace
Exploration Agency [online]. Available from: http://www.jaxa.jp/press/2006/02/20060217_daichi_e.html [Accessed February 2008].
Kurosu, T., et al., 1999. Texture statistics for classification of land use with multitemporal JERS-1
SAR single look imagery. IEEE Transactions on Geoscience and Remote Sensing, 37 (1), 227–
235. doi:10.1109/36.739157
Le Hégarat-Mascle, S., Bloch, I., and Vidal-Madjar, D., 1998. Introduction of neighborhood
information in evidence theory and application to data fusion of radar and optical images
with partial cloud cover. Pattern Recognition, 31 (11), 1811–1823. doi:10.1016/
S0031-3203(98)00051-X
Le Hegarat-Mascle, S., et al., 2000. Land cover discrimination from multitemporal ERS images and
multispectral Landsat images: a study case in an agricultural area in France. International
Journal of Remote Sensing, 21 (3), 435–456. doi:10.1080/014311600210678
Li, G., et al., 2012. A comparative analysis of ALOS PALSAR L-band and RADARSAT-2 C-band
data for land-cover classification in a tropical moist region. ISPRS Journal of Photogrammetry
and Remote Sensing, 70, 26–38. doi:10.1016/j.isprsjprs.2012.03.010
Lloyd, C., et al., 2004. A comparison of texture measures for the per-field classification of
Mediterranean land cover. International Journal of Remote Sensing, 25 (19), 3943–3965.
doi:10.1080/0143116042000192321
Lu, Y., et al., 1996. Adaptive filtering algorithms for SAR speckle reduction. IEEE Geoscience and
Remote Sensing Symposium, Proceedings, IGARSS 1996, 1, 67–69.
Luo, R.C., Yih, C., and Su, K.L., 2002. Multisensor fusion and integration: approaches, applica-
tions, and future research directions. IEEE Sensors Journal, 2 (2), 107–119. doi:10.1109/
JSEN.2002.1000251
Maghsoudi, Y., Collins, M., and Leckie, D., 2012. Speckle reduction for the forest mapping analysis
of multi-temporal Radarsat-1 images. International Journal of Remote Sensing, 33 (5), 1349–
1359. doi:10.1080/01431161.2011.568530
Mahabir, R. and Al-Tahir, R. 2008. The role of spatial data infrastructure in the management of land
degradation in small tropical Caribbean Islands [online]. In: Tenth International Conference for
Spatial Data Infrastructure, 25–29 February, St. Augustine, Trinidad, 25–29. Available from:
http://www.gsdi.org/gsdiconf/gsdi10/prog_details.html [Accessed 15 January 2015].
Maillard, P., 2003. Comparing texture analysis methods through classification. Photogrammetric
Engineering and Remote Sensing, 69 (4), 357–367. doi:10.14358/PERS.69.4.357
Nyoungui, A., Tonye, E., and Akono, A., 2002. Evaluation of speckle filtering and texture analysis
methods for land cover classification from SAR images. International Journal of Remote
Sensing, 23 (9), 1895–1925. doi:10.1080/01431160110036157
Pereira, L.P., et al., 2013. Optical and radar data integration for land use and land cover mapping in
the Brazilian Amazon. GIScience & Remote Sensing, 50 (3), 301–321. doi:10.1080/
15481603.2013.805589
Pohl, C. and Van Genderen, J.L., 1998. Review article multisensor image fusion in remote sensing:
concepts, methods and applications. International Journal of Remote Sensing, 19 (5), 823–854.
doi:10.1080/014311698215748
Richards, J.A. and Jia, X., 2005. Remote sensing and digital image analysis. 5th ed. Berlin:
Springer, 194–199.
Santos, C. and Messina, J., 2008. Multi-sensor data fusion for modeling African Palm in the
Ecuadorian Amazon. Photogrammetric Engineering and Remote Sensing, 74 (6), 711–723.
doi:10.14358/PERS.74.6.711
Sawaya, S., et al., 2010. Land use/cover mapping with quad-polarization RADAR and derived
texture measures near Wad Madani, Sudan. GIScience & Remote Sensing, 47 (3), 398–411.
doi:10.2747/1548-1603.47.3.398
Shao, Y., et al., 2001. Rice monitoring and production estimation using multitemporal RADARSAT.
Remote Sensing of Environment, 76 (3), 310–325. doi:10.1016/S0034-4257(00)00212-1
Sheoran, A. and Haack, B., 2013. Classification of California agriculture using quad polarization
radar data and Landsat Thematic Mapper data. GIScience and Remote Sensing, 50 (1), 50–63.
doi:10.1080/15481603.2013.778555
Shiraishi, T., et al., 2014. Comparative assessment of supervised classifiers for land use–land cover
classification in a tropical region using time-series PALSAR mosaic data. IEEE Journal of
Selected Topics in Applied Earth Observations and Remote Sensing, 7 (4), 1186–1199.
doi:10.1109/JSTARS.2014.2313572
Solberg, A. and Anil, K., 1997. Texture fusion and feature selection applied to SAR imagery. IEEE Transactions on Geoscience and Remote Sensing, 10 (6), 989–1003. doi:10.1109/36.563288
Töyrä, J., Pietroniro, A., and Martz, L., 2001. Multisensor hydrologic assessment of a freshwater
wetland. Remote Sensing of Environment, 75, 162–173. doi:10.1016/S0034-4257(00)00164-4
USGS, 1988. River basins of the United States: the Potomac. Denver, CO: US Geological Survey, 9.
Villiger, E., 2008. Radar and multispectral image fusion options for improved land cover classifica-
tion. PhD dissertation. George Mason University.
Waske, B. and Van Der Linden, S., 2008. Classifying multilevel imagery from SAR and optical
sensors by decision fusion. IEEE Transactions on Geoscience and Remote Sensing, 46 (5),
1457–1466. doi:10.1109/TGRS.2008.916089
Separability Analysis of Integrated Spaceborne Radar and Optical Data: Sudan ...Separability Analysis of Integrated Spaceborne Radar and Optical Data: Sudan ...
Separability Analysis of Integrated Spaceborne Radar and Optical Data: Sudan ...
 
RemoteSensingProjectPaper
RemoteSensingProjectPaperRemoteSensingProjectPaper
RemoteSensingProjectPaper
 
Change detection process and techniques
Change detection process and techniquesChange detection process and techniques
Change detection process and techniques
 
Heritage hetherington lidar_pdf[1]
Heritage hetherington lidar_pdf[1]Heritage hetherington lidar_pdf[1]
Heritage hetherington lidar_pdf[1]
 
10.1080@01431161.2011.630331.pdf
10.1080@01431161.2011.630331.pdf10.1080@01431161.2011.630331.pdf
10.1080@01431161.2011.630331.pdf
 
Isprsarchives xl-7-w3-897-2015
Isprsarchives xl-7-w3-897-2015Isprsarchives xl-7-w3-897-2015
Isprsarchives xl-7-w3-897-2015
 
Buckles_research
Buckles_researchBuckles_research
Buckles_research
 
Remotesensing 11-00164
Remotesensing 11-00164Remotesensing 11-00164
Remotesensing 11-00164
 
ASSESSMENT OF VEGETATION COVER USING UAV-BASED REMOTE SENSING
ASSESSMENT OF VEGETATION COVER  USING UAV-BASED REMOTE SENSINGASSESSMENT OF VEGETATION COVER  USING UAV-BASED REMOTE SENSING
ASSESSMENT OF VEGETATION COVER USING UAV-BASED REMOTE SENSING
 
2003-12-02 Environmental Information Systems for Monitoring, Assessment, and ...
2003-12-02 Environmental Information Systems for Monitoring, Assessment, and ...2003-12-02 Environmental Information Systems for Monitoring, Assessment, and ...
2003-12-02 Environmental Information Systems for Monitoring, Assessment, and ...
 
Performance analysis of change detection techniques for land use land cover
Performance analysis of change detection techniques for land  use land coverPerformance analysis of change detection techniques for land  use land cover
Performance analysis of change detection techniques for land use land cover
 
[International agrophysics] ground penetrating radar for underground sensing ...
[International agrophysics] ground penetrating radar for underground sensing ...[International agrophysics] ground penetrating radar for underground sensing ...
[International agrophysics] ground penetrating radar for underground sensing ...
 
Remote sensing-derived national land cover land use maps: a comparison for Ma...
Remote sensing-derived national land cover land use maps: a comparison for Ma...Remote sensing-derived national land cover land use maps: a comparison for Ma...
Remote sensing-derived national land cover land use maps: a comparison for Ma...
 
Use of UAS for Hydrological Monitoring
Use of UAS for Hydrological MonitoringUse of UAS for Hydrological Monitoring
Use of UAS for Hydrological Monitoring
 
Landuse landcover and ndvi analysis for halia catchment
Landuse landcover and ndvi analysis for halia catchmentLanduse landcover and ndvi analysis for halia catchment
Landuse landcover and ndvi analysis for halia catchment
 
Directional Analysis and Filtering for Dust Storm detection in NOAA-AVHRR Ima...
Directional Analysis and Filtering for Dust Storm detection in NOAA-AVHRR Ima...Directional Analysis and Filtering for Dust Storm detection in NOAA-AVHRR Ima...
Directional Analysis and Filtering for Dust Storm detection in NOAA-AVHRR Ima...
 
huber_etal_RSE_2014
huber_etal_RSE_2014huber_etal_RSE_2014
huber_etal_RSE_2014
 
Term paper.pptx
Term paper.pptxTerm paper.pptx
Term paper.pptx
 
Flood Detection Using Empirical Bayesian Networks
Flood Detection Using Empirical Bayesian NetworksFlood Detection Using Empirical Bayesian Networks
Flood Detection Using Empirical Bayesian Networks
 

Mehr von rsmahabir

A Critical Review of High and Very High-Resolution Remote Sensing Approaches ...
A Critical Review of High and Very High-Resolution Remote Sensing Approaches ...A Critical Review of High and Very High-Resolution Remote Sensing Approaches ...
A Critical Review of High and Very High-Resolution Remote Sensing Approaches ...rsmahabir
 
Impact of road networks on the distribution of dengue fever cases in Trinidad...
Impact of road networks on the distribution of dengue fever cases in Trinidad...Impact of road networks on the distribution of dengue fever cases in Trinidad...
Impact of road networks on the distribution of dengue fever cases in Trinidad...rsmahabir
 
The Rabies Epidemic in Trinidad of 1923 to 1937: An Evaluation with a Geograp...
The Rabies Epidemic in Trinidad of 1923 to 1937: An Evaluation with a Geograp...The Rabies Epidemic in Trinidad of 1923 to 1937: An Evaluation with a Geograp...
The Rabies Epidemic in Trinidad of 1923 to 1937: An Evaluation with a Geograp...rsmahabir
 
The Role of Spatial Data Infrastructure in the Management of Land Degradation...
The Role of Spatial Data Infrastructure in the Management of Land Degradation...The Role of Spatial Data Infrastructure in the Management of Land Degradation...
The Role of Spatial Data Infrastructure in the Management of Land Degradation...rsmahabir
 
Advancing the Use of Earth Observation Systems for the Assessment of Sustaina...
Advancing the Use of Earth Observation Systems for the Assessment of Sustaina...Advancing the Use of Earth Observation Systems for the Assessment of Sustaina...
Advancing the Use of Earth Observation Systems for the Assessment of Sustaina...rsmahabir
 
Dengue Fever Epidemiology and Control in the Caribbean: A Status Report (2012)
Dengue Fever Epidemiology and Control in the Caribbean: A Status Report (2012)Dengue Fever Epidemiology and Control in the Caribbean: A Status Report (2012)
Dengue Fever Epidemiology and Control in the Caribbean: A Status Report (2012)rsmahabir
 
APPLICATIONS OF REMOTE SENSING AND GIS TECHNOLOGIES IN FLOOD RISK MANAGEMENT
APPLICATIONS OF REMOTE SENSING AND GIS TECHNOLOGIES IN FLOOD RISK MANAGEMENTAPPLICATIONS OF REMOTE SENSING AND GIS TECHNOLOGIES IN FLOOD RISK MANAGEMENT
APPLICATIONS OF REMOTE SENSING AND GIS TECHNOLOGIES IN FLOOD RISK MANAGEMENTrsmahabir
 
Healthy Food Accessibility and Obesity: Case Study of Pennsylvania, USA
Healthy Food Accessibility and Obesity: Case Study of Pennsylvania, USAHealthy Food Accessibility and Obesity: Case Study of Pennsylvania, USA
Healthy Food Accessibility and Obesity: Case Study of Pennsylvania, USArsmahabir
 
Exploratory space-time analysis of dengue incidence in trinidad: a retrospect...
Exploratory space-time analysis of dengue incidence in trinidad: a retrospect...Exploratory space-time analysis of dengue incidence in trinidad: a retrospect...
Exploratory space-time analysis of dengue incidence in trinidad: a retrospect...rsmahabir
 
VDIS: A System for Morphological Detection and Identification of Vehicles in ...
VDIS: A System for Morphological Detection and Identification of Vehicles in ...VDIS: A System for Morphological Detection and Identification of Vehicles in ...
VDIS: A System for Morphological Detection and Identification of Vehicles in ...rsmahabir
 
Climate Change and Forest Management: Adaptation of Geospatial Technologies
Climate Change and Forest Management: Adaptation of Geospatial TechnologiesClimate Change and Forest Management: Adaptation of Geospatial Technologies
Climate Change and Forest Management: Adaptation of Geospatial Technologiesrsmahabir
 
Black holes no more the emergence of volunteer geographic information
Black holes no more  the emergence of volunteer geographic informationBlack holes no more  the emergence of volunteer geographic information
Black holes no more the emergence of volunteer geographic informationrsmahabir
 
Coral Reefs: Challenges, Opportunities and Evolutionary Strategies for Surviv...
Coral Reefs: Challenges, Opportunities and Evolutionary Strategies for Surviv...Coral Reefs: Challenges, Opportunities and Evolutionary Strategies for Surviv...
Coral Reefs: Challenges, Opportunities and Evolutionary Strategies for Surviv...rsmahabir
 
Authoritative and Volunteered Geographical Information in a Developing Countr...
Authoritative and Volunteered Geographical Information in a Developing Countr...Authoritative and Volunteered Geographical Information in a Developing Countr...
Authoritative and Volunteered Geographical Information in a Developing Countr...rsmahabir
 

Mehr von rsmahabir (14)

A Critical Review of High and Very High-Resolution Remote Sensing Approaches ...
A Critical Review of High and Very High-Resolution Remote Sensing Approaches ...A Critical Review of High and Very High-Resolution Remote Sensing Approaches ...
A Critical Review of High and Very High-Resolution Remote Sensing Approaches ...
 
Impact of road networks on the distribution of dengue fever cases in Trinidad...
Impact of road networks on the distribution of dengue fever cases in Trinidad...Impact of road networks on the distribution of dengue fever cases in Trinidad...
Impact of road networks on the distribution of dengue fever cases in Trinidad...
 
The Rabies Epidemic in Trinidad of 1923 to 1937: An Evaluation with a Geograp...
The Rabies Epidemic in Trinidad of 1923 to 1937: An Evaluation with a Geograp...The Rabies Epidemic in Trinidad of 1923 to 1937: An Evaluation with a Geograp...
The Rabies Epidemic in Trinidad of 1923 to 1937: An Evaluation with a Geograp...
 
The Role of Spatial Data Infrastructure in the Management of Land Degradation...
The Role of Spatial Data Infrastructure in the Management of Land Degradation...The Role of Spatial Data Infrastructure in the Management of Land Degradation...
The Role of Spatial Data Infrastructure in the Management of Land Degradation...
 
Advancing the Use of Earth Observation Systems for the Assessment of Sustaina...
Advancing the Use of Earth Observation Systems for the Assessment of Sustaina...Advancing the Use of Earth Observation Systems for the Assessment of Sustaina...
Advancing the Use of Earth Observation Systems for the Assessment of Sustaina...
 
Dengue Fever Epidemiology and Control in the Caribbean: A Status Report (2012)
Dengue Fever Epidemiology and Control in the Caribbean: A Status Report (2012)Dengue Fever Epidemiology and Control in the Caribbean: A Status Report (2012)
Dengue Fever Epidemiology and Control in the Caribbean: A Status Report (2012)
 
APPLICATIONS OF REMOTE SENSING AND GIS TECHNOLOGIES IN FLOOD RISK MANAGEMENT
APPLICATIONS OF REMOTE SENSING AND GIS TECHNOLOGIES IN FLOOD RISK MANAGEMENTAPPLICATIONS OF REMOTE SENSING AND GIS TECHNOLOGIES IN FLOOD RISK MANAGEMENT
APPLICATIONS OF REMOTE SENSING AND GIS TECHNOLOGIES IN FLOOD RISK MANAGEMENT
 
Healthy Food Accessibility and Obesity: Case Study of Pennsylvania, USA
Healthy Food Accessibility and Obesity: Case Study of Pennsylvania, USAHealthy Food Accessibility and Obesity: Case Study of Pennsylvania, USA
Healthy Food Accessibility and Obesity: Case Study of Pennsylvania, USA
 
Exploratory space-time analysis of dengue incidence in trinidad: a retrospect...
Exploratory space-time analysis of dengue incidence in trinidad: a retrospect...Exploratory space-time analysis of dengue incidence in trinidad: a retrospect...
Exploratory space-time analysis of dengue incidence in trinidad: a retrospect...
 
VDIS: A System for Morphological Detection and Identification of Vehicles in ...
VDIS: A System for Morphological Detection and Identification of Vehicles in ...VDIS: A System for Morphological Detection and Identification of Vehicles in ...
VDIS: A System for Morphological Detection and Identification of Vehicles in ...
 
Climate Change and Forest Management: Adaptation of Geospatial Technologies
Climate Change and Forest Management: Adaptation of Geospatial TechnologiesClimate Change and Forest Management: Adaptation of Geospatial Technologies
Climate Change and Forest Management: Adaptation of Geospatial Technologies
 
Black holes no more the emergence of volunteer geographic information
Black holes no more  the emergence of volunteer geographic informationBlack holes no more  the emergence of volunteer geographic information
Black holes no more the emergence of volunteer geographic information
 
Coral Reefs: Challenges, Opportunities and Evolutionary Strategies for Surviv...
Coral Reefs: Challenges, Opportunities and Evolutionary Strategies for Surviv...Coral Reefs: Challenges, Opportunities and Evolutionary Strategies for Surviv...
Coral Reefs: Challenges, Opportunities and Evolutionary Strategies for Surviv...
 
Authoritative and Volunteered Geographical Information in a Developing Countr...
Authoritative and Volunteered Geographical Information in a Developing Countr...Authoritative and Volunteered Geographical Information in a Developing Countr...
Authoritative and Volunteered Geographical Information in a Developing Countr...
 

Kürzlich hochgeladen

LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptxLIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptxmalonesandreagweneth
 
BUMI DAN ANTARIKSA PROJEK IPAS SMK KELAS X.pdf
BUMI DAN ANTARIKSA PROJEK IPAS SMK KELAS X.pdfBUMI DAN ANTARIKSA PROJEK IPAS SMK KELAS X.pdf
BUMI DAN ANTARIKSA PROJEK IPAS SMK KELAS X.pdfWildaNurAmalia2
 
REVISTA DE BIOLOGIA E CIÊNCIAS DA TERRA ISSN 1519-5228 - Artigo_Bioterra_V24_...
REVISTA DE BIOLOGIA E CIÊNCIAS DA TERRA ISSN 1519-5228 - Artigo_Bioterra_V24_...REVISTA DE BIOLOGIA E CIÊNCIAS DA TERRA ISSN 1519-5228 - Artigo_Bioterra_V24_...
REVISTA DE BIOLOGIA E CIÊNCIAS DA TERRA ISSN 1519-5228 - Artigo_Bioterra_V24_...Universidade Federal de Sergipe - UFS
 
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In DubaiDubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubaikojalkojal131
 
Four Spheres of the Earth Presentation.ppt
Four Spheres of the Earth Presentation.pptFour Spheres of the Earth Presentation.ppt
Four Spheres of the Earth Presentation.pptJoemSTuliba
 
Functional group interconversions(oxidation reduction)
Functional group interconversions(oxidation reduction)Functional group interconversions(oxidation reduction)
Functional group interconversions(oxidation reduction)itwameryclare
 
The dark energy paradox leads to a new structure of spacetime.pptx
The dark energy paradox leads to a new structure of spacetime.pptxThe dark energy paradox leads to a new structure of spacetime.pptx
The dark energy paradox leads to a new structure of spacetime.pptxEran Akiva Sinbar
 
Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...
Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...
Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...D. B. S. College Kanpur
 
Microteaching on terms used in filtration .Pharmaceutical Engineering
Microteaching on terms used in filtration .Pharmaceutical EngineeringMicroteaching on terms used in filtration .Pharmaceutical Engineering
Microteaching on terms used in filtration .Pharmaceutical EngineeringPrajakta Shinde
 
BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.
BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.
BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.PraveenaKalaiselvan1
 
FREE NURSING BUNDLE FOR NURSES.PDF by na
FREE NURSING BUNDLE FOR NURSES.PDF by naFREE NURSING BUNDLE FOR NURSES.PDF by na
FREE NURSING BUNDLE FOR NURSES.PDF by naJASISJULIANOELYNV
 
GenBio2 - Lesson 1 - Introduction to Genetics.pptx
GenBio2 - Lesson 1 - Introduction to Genetics.pptxGenBio2 - Lesson 1 - Introduction to Genetics.pptx
GenBio2 - Lesson 1 - Introduction to Genetics.pptxBerniceCayabyab1
 
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCRCall Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCRlizamodels9
 
Pests of safflower_Binomics_Identification_Dr.UPR.pdf
Pests of safflower_Binomics_Identification_Dr.UPR.pdfPests of safflower_Binomics_Identification_Dr.UPR.pdf
Pests of safflower_Binomics_Identification_Dr.UPR.pdfPirithiRaju
 
Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...
Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...
Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...lizamodels9
 
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptxTHE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptxNandakishor Bhaurao Deshmukh
 
Bioteknologi kelas 10 kumer smapsa .pptx
Bioteknologi kelas 10 kumer smapsa .pptxBioteknologi kelas 10 kumer smapsa .pptx
Bioteknologi kelas 10 kumer smapsa .pptx023NiWayanAnggiSriWa
 
Topic 9- General Principles of International Law.pptx
Topic 9- General Principles of International Law.pptxTopic 9- General Principles of International Law.pptx
Topic 9- General Principles of International Law.pptxJorenAcuavera1
 
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptxSTOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptxMurugaveni B
 

Kürzlich hochgeladen (20)

LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptxLIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
 
BUMI DAN ANTARIKSA PROJEK IPAS SMK KELAS X.pdf
BUMI DAN ANTARIKSA PROJEK IPAS SMK KELAS X.pdfBUMI DAN ANTARIKSA PROJEK IPAS SMK KELAS X.pdf
BUMI DAN ANTARIKSA PROJEK IPAS SMK KELAS X.pdf
 
REVISTA DE BIOLOGIA E CIÊNCIAS DA TERRA ISSN 1519-5228 - Artigo_Bioterra_V24_...
REVISTA DE BIOLOGIA E CIÊNCIAS DA TERRA ISSN 1519-5228 - Artigo_Bioterra_V24_...REVISTA DE BIOLOGIA E CIÊNCIAS DA TERRA ISSN 1519-5228 - Artigo_Bioterra_V24_...
REVISTA DE BIOLOGIA E CIÊNCIAS DA TERRA ISSN 1519-5228 - Artigo_Bioterra_V24_...
 
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In DubaiDubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
 
Four Spheres of the Earth Presentation.ppt
Four Spheres of the Earth Presentation.pptFour Spheres of the Earth Presentation.ppt
Four Spheres of the Earth Presentation.ppt
 
Functional group interconversions(oxidation reduction)
Functional group interconversions(oxidation reduction)Functional group interconversions(oxidation reduction)
Functional group interconversions(oxidation reduction)
 
The dark energy paradox leads to a new structure of spacetime.pptx
The dark energy paradox leads to a new structure of spacetime.pptxThe dark energy paradox leads to a new structure of spacetime.pptx
The dark energy paradox leads to a new structure of spacetime.pptx
 
Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...
Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...
Fertilization: Sperm and the egg—collectively called the gametes—fuse togethe...
 
Microteaching on terms used in filtration .Pharmaceutical Engineering
Microteaching on terms used in filtration .Pharmaceutical EngineeringMicroteaching on terms used in filtration .Pharmaceutical Engineering
Microteaching on terms used in filtration .Pharmaceutical Engineering
 
BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.
BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.
BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.
 
FREE NURSING BUNDLE FOR NURSES.PDF by na
FREE NURSING BUNDLE FOR NURSES.PDF by naFREE NURSING BUNDLE FOR NURSES.PDF by na
FREE NURSING BUNDLE FOR NURSES.PDF by na
 
GenBio2 - Lesson 1 - Introduction to Genetics.pptx
GenBio2 - Lesson 1 - Introduction to Genetics.pptxGenBio2 - Lesson 1 - Introduction to Genetics.pptx
GenBio2 - Lesson 1 - Introduction to Genetics.pptx
 
Volatile Oils Pharmacognosy And Phytochemistry -I
Volatile Oils Pharmacognosy And Phytochemistry -IVolatile Oils Pharmacognosy And Phytochemistry -I
Volatile Oils Pharmacognosy And Phytochemistry -I
 
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCRCall Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
 
Pests of safflower_Binomics_Identification_Dr.UPR.pdf
Pests of safflower_Binomics_Identification_Dr.UPR.pdfPests of safflower_Binomics_Identification_Dr.UPR.pdf
Pests of safflower_Binomics_Identification_Dr.UPR.pdf
 
Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...
Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...
Best Call Girls In Sector 29 Gurgaon❤️8860477959 EscorTs Service In 24/7 Delh...
 
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptxTHE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
 
Bioteknologi kelas 10 kumer smapsa .pptx
Bioteknologi kelas 10 kumer smapsa .pptxBioteknologi kelas 10 kumer smapsa .pptx
Bioteknologi kelas 10 kumer smapsa .pptx
 
Topic 9- General Principles of International Law.pptx
Topic 9- General Principles of International Law.pptxTopic 9- General Principles of International Law.pptx
Topic 9- General Principles of International Law.pptx
 
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptxSTOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
 

Radar and optical remote sensing data evaluation and fusion; a case study for Washington, DC, USA

This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Terms & Conditions of access and use can be found at http://www.tandfonline.com/page/terms-and-conditions
  • 3. Radar and optical remote sensing data evaluation and fusion; a case study for Washington, DC, USA Terry Idol, Barry Haack and Ron Mahabir* Department of Geography and Geoinformation Science, George Mason University, Fairfax, VA, USA (Received 17 December 2014; accepted 3 February 2015) The recent increase in the availability of spaceborne radar in different wavelengths with multiple polarisations provides new opportunities for land surface analysis. This research effort explored how different radar data, and derived texture values, indepen- dently and in combination with optical imagery influence land cover/use classification accuracies for a study site in Washington, DC, USA. Two spaceborne radar images, Radarsat-2L-band and Palsar C-band quad-polarised radar, were registered with Aster optical data for this study. Traditional methods of classification were applied to various components and combinations of this data set, and overall and class-specific thematic accuracies obtained for comparison. The results for the two despeckled radar data sets were quite different, with Radarsat-2 obtaining an overall accuracy of 59% and Palsar 77%, while that of the optical Aster was 90%. Combining the original radar and a variance texture measure increased the accuracy of Radarsat-2 to 71% but that of Palsar only to 78%. One of the sensor fusions of optical and radar obtained an accuracy of 93%. For this location, radar by itself does not obtain classification accuracies as high as optical data, but fusion with optical imagery provides better overall thematic accuracy than the optical independently, and results in some useful improvements on a class-by-class basis. For those regions with high cloud cover, quad polarisation radar can independently provide viable results but it may be wavelength-dependent. Keywords: Radarsat-2; Palsar; Aster; quad polarisation; sensor fusion; variance texture; Washington, DC 1. 
Introduction Reliable and up-to-date land cover/use information, both current and changes over time, represent a fundamental resource for tracking human modification of the earth’s terrestrial surface. This information is required for making sound decisions on a broad range of economic and environmental planning and resource management issues, including plan- ning infrastructure, reducing pollution and expanding food production. Although conven- tional ground survey methods for collecting land cover/use information still continue to be used on a much smaller scale, they suffer from several limitations, such as being labour- intensive and time-consuming with data collected relatively infrequently (Al-Tahir et al. 2009). Compared to conventional methods, remote sensing offers several advantages for collecting land cover/use data, including synoptic view, relatively low cost and the ability to capture changes on the ground in shorter and in a more temporally consistent manner (Haack et al. 2014). With increasing pressure placed on ecosystems due to accelerated population growth, urbanisation, migration and economic growth, it is *Corresponding author. Email: rmahabir@gmu.edu International Journal of Image and Data Fusion, 2015 http://dx.doi.org/10.1080/19479832.2015.1017541 © 2015 Taylor & Francis Downloadedby[GeorgeMasonUniversity]at11:5723March2015
  • 4. expected that remote sensing will continue to be relied on as a sustainable source of information on land cover/use. There has been a tremendous increase in the number of spaceborne remote sensors over the last years. These systems have provided data with a broad range of spatial, spectral, temporal and radiometric resolutions. With these expansions in data types, there has been a much wider set of applications for these data and improvements in derived land cover/use maps and statistical information. For many years, this technology has been based on optical, typically multispectral, systems such as Landsat and the French Satellite Pour I’Observation de la Terre (SPOT). More recently, active microwave or radar has become more available. These radar systems have some significant advantages over optical systems in their ability to penetrate cloud cover and have night-time capabilities (Al-Tahir et al. 2014). This provides the opportunity to collect data in areas, such as low- latitude tropical regions and high latitudes, where it is difficult to obtain data via other sensors (Henderson et al. 2002, Li et al. 2012). In the Amazon forest of South America, for example, the likelihood of capturing optical image scenes with cloud cover rates of 30% or less can be as low as 0% per year (Asner 2001). Mahabir and Al-Tahir (2008) also report similar issues for the Caribbean region using the island of Trinidad as a case study. In such locations, radar imagery has tremendous potential for both updating and monitor- ing land cover/use changes. Spaceborne radar has recently improved greatly from the single wavelength and single polarisation, in essence one band, which earlier systems provided. Those systems were very limited in the amount of surface information that could be extracted (Töyrä et al. 2001, Dell’Acqua et al. 
2003). Newer systems, such as the Japanese Phased Array type L-band Synthetic Aperture Radar (PALSAR), the Canadian RADARSAT-2 and the European TerraSAR-X and Sentinel sensors, collect information from multiple polarisations, allowing for much more complex processing and analysis and potentially more useful spatial information (Sawaya et al. 2010, Sheoran and Haack 2013). In addition, individual radar sensors may function in different microwave portions of the spectrum, providing opportunities for comparison and integration.

Polarisation, the orientation of the beam relative to the earth's surface, either vertical or horizontal, is important to remote sensing scientists as each type of polarisation provides a different type of information. Polarisation can be altered for both the transmitting and receiving aspects of the process, thus allowing four possible combinations of sent and received signals: HH (horizontal sent/horizontal received), VV (vertical sent/vertical received), HV (horizontal sent/vertical received) and VH (vertical sent/horizontal received) (Campbell and Wynne 2012). With a quad polarisation sensor, all four combinations are acquired.

One of the important derived values from radar is surface texture, the degree of smoothness or roughness of a feature. For some features, texture by itself can be useful, but often it is combined with the original radar data. There are many texture derivatives at multiple window sizes that can be extracted from an image, potentially creating many additional bands for analysis (Anderson 1998, Dekker 2003, Herold et al. 2003, 2004, Lloyd et al. 2004, Amarsaikhan et al. 2007). These additional layers offer different sets of information that can be used to improve discrimination between land cover/use features in an image.

Various studies have compared different approaches for combining optical and radar data for improving discrimination of the earth's surface features. Pereira et al.
(2013) compared layer stacking and principal component fusion methods for separating different agricultural land cover/use types in Brazil. Both Palsar and Landsat 5 TM data were used,
with better discrimination resulting from layer stacking. Similarly, in Waske and Van Der Linden (2008), support vector machine and random forest methods were tested, with overall classification accuracy results similar for both methods. Alparone et al. (2004) developed an intensity-modulated approach, while Eshan (2011) compared Hue Intensity Saturation and Brovey transforms and Le Hégarat-Mascle et al. (1998) applied a Dempster-Shafer approach. These studies and others show improved classification results from the combined use of optical and radar data compared to the results of individual sensors. Numerous other methods for multisensor fusion exist, with excellent reviews of this topic found in Luo et al. (2002), Pohl and Van Genderen (1998), Hall and Llinas (1997) and Ehlers (1991).

Many of the approaches applied to the fusion of optical and radar data look at single areas where heterogeneity is less prevalent, for example, forest or agricultural areas, rather than the separation of feature types from multiple land cover/use types. Furthermore, of those studies which have examined multiple land covers/uses, few have used data gathered from multiple polarisations and from multiple wavelengths. Such studies are becoming increasingly important with the increased availability of these data types and with the expansion of human settlements.

The purpose of this research is to compare land cover/use classifications obtained independently and in combinations of different radar wavelengths, polarisations and derived texture measures. In addition, an optical image was included in this analysis. This was an important component of this research since radar data, in comparison to optical data, are usually captured within a much more limited set of bands (Shiraishi et al. 2014).
It is therefore expected that the combination of radar, radar-derived texture measures and optical data will lead to improvements in the classification accuracy of derived land cover/use. Furthermore, the Washington, DC, site used in this study presents the opportunity to examine a complex landscape which continues to be influenced by many cultures, both within the major city limits and the surrounding landscape.

In Section 2, a brief description of the study site and data used is given. Section 3 provides the methodology for classifying and determining the accuracy of the results. Section 4 provides results for the various radar, radar-derived texture measures, optical data and combinations of these for supporting land cover/use mapping, while Section 5 concludes this paper.

2. Study site and data

The study area is Washington, DC, USA. Aster, Radarsat-2 and Palsar images were acquired over the study area. The Aster image was collected on 11 March 2009, while the Radarsat-2 and Palsar quad polarisation data were acquired on 17 July 2009 and 17 April 2007, respectively. These differences in acquisition dates do create some concerns, but since the primary goal is a relative comparison of different data combinations, those concerns should be consistent for all classifications, thus allowing valid conclusions.

Radarsat-2 was launched in December 2007 and is the first commercial radar sensor to acquire C-band quad polarisation imagery. Radarsat-2 offers a wide range of spatial resolutions that vary based on different beam modes of operation (Canadian Space Agency 2008). A fine pixel resolution 8 m quad-polarisation image was obtained for this study. The Palsar satellite was launched in January 2006. Palsar uses L-band radar with quad polarisation and is supported by the Japan Aerospace Exploration Agency (JAXA), a Japanese Government organisation. The spatial resolution from Palsar was 12.5 m (JAXA 2006).
Aster, an optical instrument on board the Terra satellite, was first launched
in December 1999 as a joint venture between the United States and Japan. This sensor collects earth system data along 14 bands, but only the three with the finest spatial resolution of 15 m were selected for this analysis: the visible and near infrared nadir bands (0.52–0.86 μm).

The Washington, DC, study area and several surrounding suburban/urban areas (white and pink tones) are shown in Figure 1. The imagery also includes a significant portion of forest (green and dark grey tones). Forests are mainly located outside of the city limits, separating most suburban areas, with greater fragmentation of this land cover/use type in closer proximity to suburban and urban areas. In addition, the Potomac River (black tones) provides the opportunity to classify water bodies. The Potomac River has a length of approximately 644 km, with its deepest point at 107 feet. However, a navigable channel depth of about 24 feet is maintained for most of the downstream portion of the Washington, DC, area (USGS 1988).

The vast majority of high backscatter areas (white tones) in the radar image were the urban features in and around Washington, DC. As shown in Figure 1, many urban centres sit alongside the banks of the Potomac in downtown Washington, DC. Suburban residential areas (pink tones) were present across much of the scene and demonstrated a mix of high and low radar returns, as would be expected from a complex landscape of buildings, lawns, trees, roads, etc. This study site is useful for determining whether different combinations of original radar and texture measures can provide good urban classifications.

Figure 1. Radarsat-2 composite (HH, VV, HV and VH) image over Washington (approximate size 27 × 31 km).
The land cover/use classification types examined for Washington, DC, consisted of urban, forest, suburban and water, as described by Anderson et al. (1976). The classes used in this study were generalised and limited in number, but for a comparison of methods and data they were considered sufficient. At a later research stage, based upon results from this study, a more detailed definition of classes may be considered.

3. Methodology

Collected images from the various Radarsat-2, Palsar and Aster sensors over the study area were first reduced to the lowest common boundary between all three products. Images were then registered to a common geographic coordinate system, Universal Transverse Mercator Zone 18 N with an earth model of World Geodetic System 1984, and pixels were resampled to 10 m using the nearest neighbour algorithm to support uniform analysis of the data. In addition, the radiometric resolution of all data was consistently set at 8 bits.

Training and truth areas of interest (AOI) polygons were then carefully selected to prevent, as much as possible, cross-contamination of class pixels. These polygons were determined by knowledge of the area, ground reconnaissance, visual analysis of the various remote sensing data and use of higher spatial resolution imagery, such as from Google Earth. Figure 2 shows samples of each land cover/use type collected from the Aster imagery. These are represented at different scales to assist in visual

Figure 2. Optical scenes of Washington, DC, classes from Aster imagery: forested, suburban, urban and water.
differentiation of each type. Training AOIs were used to calibrate, or train, the classification algorithm and were exclusive to this use only, as was the use of the truth AOIs. The training AOIs identified the spectral characteristics, or signatures, of each of the four classes. The truth AOIs, at different locations than the training AOIs, were used to determine the accuracy of the land cover/use classifications. A classification accuracy of 85% suggested good class and overall thematic accuracy, as recommended by Congalton and Green (1999) and Anderson et al. (1976). For both calibration and validation, two to four AOIs were selected for each class, with each AOI containing about 1600 pixels on average.

A maximum-likelihood (ML) decision rule was applied to obtain the classifications. ML is a parametric classifier based on statistical theory. It is one of the most widely used methods for land cover/use classification (Hansen et al. 1996, Richards and Jia 2005), making it an appropriate choice for this research. Images used throughout the classification process were derived from layer stacking individual layers to create a single multiband image. For example, for the Aster image, all three visible and near infrared bands were layer stacked. A similar approach was used for classifying radar (HH, VV, HV and VH bands) and radar-combined products. The next section presents the results of the various classifications, beginning with the independent Aster and radar images and followed by the various value-added texture evaluations and data combinations.

4. Results

4.1. Aster classification

Table 1 contains the results for the Aster analysis. The optical data provide an initial classification against which the radar and radar fusion results can be compared. The bottom row of each error matrix gives the producer's accuracy for each class. The column on the right of each matrix presents the user's accuracy for each class.
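These producer's, user's and overall accuracies follow directly from an error matrix. A minimal numpy sketch using the values of the Aster matrix in Table 1 (rows are mapped classes, columns are reference classes; variable names are illustrative):

```python
import numpy as np

# Error matrix for the Aster classification (Table 1): rows are mapped
# classes, columns are reference classes (water, forest, suburban, urban).
matrix = np.array([
    [4331,    0,    0,    0],
    [   0, 4511,  363,   18],
    [   0,  314, 4030,  402],
    [ 638,   68,  193, 4719],
])

users = np.diag(matrix) / matrix.sum(axis=1)      # correct / row total
producers = np.diag(matrix) / matrix.sum(axis=0)  # correct / column total
overall = np.trace(matrix) / matrix.sum()         # diagonal / grand total

# users     -> 1.000, 0.922, 0.849, 0.840
# producers -> 0.872, 0.922, 0.879, 0.918
# overall   -> 0.898
```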
The number at the bottom right of each matrix is the overall thematic accuracy. The optical land cover/use classification results are good for all classes, ranging from 87% to 92% in producer's accuracies and 84% to 100% in user's accuracies. The overall accuracy is a very good 90% for three-band imagery over a complex rural-urban interface location. It is interesting that there is not more confusion between forest and suburban or between urban and suburban.

Table 1. Error matrix for Aster, Washington, DC.

             Water    Forest   Suburban   Urban    User's
Water        4331     0        0          0        1.000
Forest       0        4511     363        18       0.922
Suburban     0        314      4030       402      0.849
Urban        638      68       193        4719     0.840
Producer's   0.872    0.922    0.879      0.918    0.898

4.2. Radar analysis

Radar imagery often contains speckle, random pixels of high or low backscatter, which are, in essence, errors arising from the sensor operation. There is considerable and not
consistent literature relative to the need to remove, or at least reduce, the amount of speckle (Lu et al. 1996, Bouchemakh et al. 2008, Maghsoudi et al. 2012). Using the radar data for Washington, an analysis was made between the spectral signatures and thematic classifications of the original radar and of radar despeckled at both 3 × 3 and 5 × 5 windows using the Lee-Sigma algorithm. The larger window size despeckled data had higher overall thematic accuracies, as would be expected, particularly given the use of polygons for accuracy assessment, because despeckling is basically a smoothing filter. However, the differences were relatively small. The Radarsat-2 original accuracy was 57%, increasing to 59% with the 5 × 5 filter, and Palsar increased from 72% to 77% upon despeckling. Based on these results, for this study, the radar images were despeckled for subsequent classifications, while derived texture values were obtained from the original radar data.

Table 2 contains the spectral signatures of two polarisations for the different land cover/use classes for the despeckled Palsar image. Only the HH and HV bands are shown in Table 2, as the HH and VV results, and the HV and VH results, were very similar. These signatures can provide information on how well the different classes are statistically separated, giving insight into how well classifications might perform. As would be expected, the larger window sizes have lower standard deviations, although the reductions are minimal. The class mean digital number (DN) values, especially for the HH polarisation, are reasonably different, especially given the standard deviations. However, other than the low DN values for water, the HV classes overlap in spectral space. The spectral signatures for the despeckled Radarsat-2 classes (not shown) produced a pattern similar to the despeckled Palsar, with the notable exception that the Palsar data had overall lower standard deviation values for each land cover/use class.
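The despeckling step can be illustrated with a simplified sigma filter in the spirit of Lee-Sigma: each pixel is replaced by the mean of the window pixels that fall within two standard deviations of the centre value under a multiplicative speckle model. This is a sketch, not the exact Lee-Sigma algorithm used in the study; the window size and coefficient of variation `cv` are illustrative:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def sigma_filter(img, size=5, cv=0.25):
    """Simplified sigma despeckle for non-negative radar DN values:
    average the window pixels lying within +/- 2*cv of the centre value
    (multiplicative speckle model). Border pixels are left unchanged.
    Not the exact Lee-Sigma algorithm."""
    pad = size // 2
    out = img.astype(float).copy()
    win = sliding_window_view(img.astype(float), (size, size))
    centre = img[pad:-pad, pad:-pad].astype(float)
    lo = centre * (1 - 2 * cv)
    hi = centre * (1 + 2 * cv)
    mask = (win >= lo[..., None, None]) & (win <= hi[..., None, None])
    out[pad:-pad, pad:-pad] = (win * mask).sum(axis=(-1, -2)) / mask.sum(axis=(-1, -2))
    return out
```

Note that a plain sigma filter leaves isolated bright spikes untouched (the real Lee-Sigma adds extra handling for that case), but it does keep spikes from bleeding into their neighbours' means.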
Table 3 contains the error matrices of the classifications for the 5 × 5 window despeckled Radarsat-2 and Palsar images. The Palsar overall accuracy is much higher than that of the

Table 2. Spectral signatures of Washington, despeckled Palsar imagery.

                         3 × 3 Window         5 × 5 Window
Land cover/use class     HH       HV          HH       HV
Water      Mean          18.47    3.36        18.47    3.35
           σ             2.02     0.53        1.66     0.50
           Min. value    12       2           13       2
           Max. value    25       4           23       4
Forest     Mean          25.93    14.22       25.91    14.22
           σ             4.75     2.14        4.29     1.87
           Min. value    15       8           17       9
           Max. value    45       22          42       21
Suburban   Mean          33.81    14.32       33.85    14.26
           σ             6.89     3.14        6.18     2.55
           Min. value    16       7           17       8
           Max. value    59       37          57       28
Urban      Mean          43.83    14.98       43.66    14.91
           σ             9.94     3.20        8.76     2.75
           Min. value    22       7           25       7
           Max. value    96       28          86       25
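One quick way to quantify the separation suggested by such signatures is a normalised-distance heuristic, |μ1 − μ2| / (σ1 + σ2), per band and class pair. This is an illustrative measure, not one used in the paper; the values below are the 5 × 5 window HH statistics from Table 2:

```python
# Mean and standard deviation of the HH band (5 x 5 window) per class,
# taken from Table 2 (despeckled Palsar).
signatures = {
    'water':    (18.47, 1.66),
    'forest':   (25.91, 4.29),
    'suburban': (33.85, 6.18),
    'urban':    (43.66, 8.76),
}

def separability(a, b):
    """Normalised distance between two class signatures: values well
    above 1 suggest classes a per-pixel classifier can separate."""
    (m1, s1), (m2, s2) = signatures[a], signatures[b]
    return abs(m1 - m2) / (s1 + s2)

# e.g. water vs forest separates well, forest vs suburban less so:
# separability('water', 'forest')    -> ~1.25
# separability('forest', 'suburban') -> ~0.76
```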
Radarsat-2 image. This could be a function of wavelength, spatial resolution, date, the properties of the despeckled images, as previously mentioned, or a combination of these factors. Nonetheless, the differences are considerable. Both sensors, as would be expected, easily delineate water, but the misclassification of water as suburban, as indicated by the producer's accuracy for Radarsat-2, is surprising. The confusion between forest and suburban in both data sets is expected, but the Palsar urban delineations were better than anticipated.

4.3. Texture analysis

Traditional digital image classification methodologies are based only upon the spectral characteristics of the data, thus ignoring any spatial information in the collected data (Maillard 2003). Some landscape features, such as residential or urban areas, are more easily distinguished by their spatial characteristics than by their spectral ones (Solberg and Anil 1997, Nyoungui et al. 2002). Ignoring the full complement of data collected, spectral and spatial, creates challenges for the accurate classification of some land cover/use classes. The spatial arrangement of an image, to some degree, can be extracted as textural information from the pixels and is particularly useful for radar (Kurosu et al. 1999, Chen et al. 2004, Champion et al. 2008, Cervone and Haack 2012). Radar texture was therefore an important component of this study, with most measures used today based on the work of Haralick et al. (1973). Based upon prior research, the variance measure of texture was selected for this study (Haack and Bechdol 2000).

Variance texture measures were extracted at four different window sizes for each band of the original, not despeckled, Radarsat-2 and Palsar data. The window sizes were 5 × 5, 9 × 9, 13 × 13 and 17 × 17. The best window sizes are a function of the spatial resolution of the sensor and the specific landscape characteristics (Villiger 2008).
Classifications were obtained for each texture window size, and their error matrices are contained in Tables 4 and 5. Equation (1) shows the method used for calculating the variance measures used in this study.

Table 3. Error matrices for Washington classification using despeckled 5 × 5 window.

Radarsat-2
             Water    Forest   Suburban   Urban    User's
Water        4101     1        2          3        0.999
Forest       0        2365     1661       777      0.492
Suburban     598      2145     2507       1785     0.356
Urban        270      382      416        2574     0.707
Producer's   0.825    0.483    0.547      0.501    0.590

Palsar
             Water    Forest   Suburban   Urban    User's
Water        4962     0        0          0        1.000
Forest       0        3605     931        77       0.781
Suburban     0        1228     2250       843      0.521
Urban        7        60       1405       4219     0.741
Producer's   0.999    0.737    0.491      0.821    0.768
Variance = Σ (X_ij − X̄)² / (n − 1)     (1)

where X_ij is the DN value of pixel (i, j), n is the number of pixels in the window and X̄ is the mean of the moving window.

Table 4 shows that texture measures for the Radarsat-2 image produced much higher accuracies than the original data (59%). However, in the case of the Palsar image, none of the texture measures were able to generate a land cover/use classification accuracy as high as the classification results for the original image (77%). The observed differences in classification results were unexpectedly high for smaller window sizes. These differences are likely a result of how the different radar bands interact with the landscape features.

There is a pattern, with only one minor exception, in both the Radarsat-2 and Palsar results. As the window size gets larger, the overall accuracy of the land cover/use classification improves. In the Radarsat-2 texture measures, the overall accuracy improves from 65% at a window size of 5 × 5 to 71% with a window of 13 × 13. There is a slight decline at the largest window size of 17 × 17. For the Palsar image, the results are similar. The overall accuracy for the texture measures increased from 64% with a window of 5 × 5 to 75% for a window of 17 × 17.

Table 4. Washington error matrices of Radarsat-2 variance texture.
5 × 5 Window
             Water    Forest   Suburban   Urban    User's
Water        4808     8        0          18       0.995
Forest       44       4003     2951       958      0.503
Suburban     97       715      1046       1268     0.335
Urban        20       167      589        2895     0.789
Producer's   0.968    0.818    0.228      0.563    0.651

9 × 9 Window
             Water    Forest   Suburban   Urban    User's
Water        4590     0        0          0        1.000
Forest       0        4038     2637       252      0.583
Suburban     379      682      1245       1063     0.370
Urban        0        173      704        3824     0.813
Producer's   0.924    0.825    0.271      0.744    0.699

13 × 13 Window
             Water    Forest   Suburban   Urban    User's
Water        4215     0        0          0        1.000
Forest       0        3882     2300       94       0.619
Suburban     754      876      1396       673      0.377
Urban        0        135      890        4372     0.810
Producer's   0.848    0.793    0.304      0.851    0.708

17 × 17 Window
             Water    Forest   Suburban   Urban    User's
Water        3397     0        0          0        1.000
Forest       0        3685     2112       4        0.635
Suburban     1572     1080     1478       428      0.324
Urban        0        128      996        4707     0.807
Producer's   0.684    0.753    0.322      0.916    0.677
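The variance texture of Equation (1) is a moving-window computation over the original radar band. A minimal numpy sketch (the window size argument is illustrative; the output shrinks by the window border rather than padding):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def variance_texture(band, size=5):
    # View every size x size window, then take the sample variance
    # (n - 1 divisor, matching Equation (1)) within each window.
    win = sliding_window_view(band.astype(float), (size, size))
    return win.var(axis=(-1, -2), ddof=1)
```

Texture bands computed this way at several window sizes (5 × 5 up to 17 × 17) would then be layer stacked with the original imagery for classification.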
The Radarsat-2 texture measure producer's accuracies show interesting fluctuations in the forest, suburban and urban classes when compared to the values obtained with the despeckled 5 × 5 original image. This is understandable, as texture is very different from backscatter. The forest and urban producer's accuracies increased, but that of the suburban class decreased with texture. The user's accuracy values were more consistent. There was an anomaly in one class's accuracy in the Radarsat-2 texture results. The water producer's accuracy decreased significantly between the 13 × 13 and 17 × 17 window sizes, from 85% to 68%. The reason for this decrease can be found by visually analysing the original Radarsat-2 imagery. Intense urban and suburban features that surround portions of the Potomac River have been 'ghosted', or reflected, onto the river, compounded by the larger texture window size, which includes more land-based pixels and therefore yields a less unique signature.

The overall classification result of the Palsar despeckled 5 × 5 image was 77%. The classification result from the texture measure generated from the Palsar original image with a window size of 17 × 17 is 75%, a decrease of 2%. The classification performed with the despeckled 5 × 5 image does slightly better in the producer's accuracy for the water, suburban and urban classes. Conversely, the texture measure classification does slightly better in the producer's accuracy for the forest class. These differences, however, are minimal.

Table 5. Washington error matrices of Palsar variance texture.
5 × 5 Window
             Water    Forest   Suburban   Urban    User's
Water        4760     108      41         27       0.964
Forest       175      3948     2392       1038     0.523
Suburban     7        613      1142       1427     0.358
Urban        27       224      1011       2647     0.677
Producer's   0.958    0.807    0.249      0.515    0.638

9 × 9 Window
             Water    Forest   Suburban   Urban    User's
Water        4784     2        0          0        1.000
Forest       168      3917     1978       480      0.599
Suburban     0        801      1490       1362     0.408
Urban        17       173      1118       3297     0.716
Producer's   0.963    0.801    0.325      0.642    0.689

13 × 13 Window
             Water    Forest   Suburban   Urban    User's
Water        4830     0        0          0        1.000
Forest       139      3800     1664       228      0.652
Suburban     0        929      1624       1283     0.423
Urban        0        164      1298       3628     0.713
Producer's   0.972    0.777    0.354      0.706    0.709

17 × 17 Window
             Water    Forest   Suburban   Urban    User's
Water        4884     0        0          0        1.000
Forest       85       3711     1489       109      0.688
Suburban     0        936      2159       1016     0.525
Urban        0        246      938        4014     0.772
Producer's   0.983    0.758    0.471      0.781    0.754
4.4. Combining despeckled radar with texture

The 5 × 5 despeckled original radar images were integrated with the best of the texture measures for each sensor and then classified, that is, the 13 × 13 and 17 × 17 variance measures for the despeckled Radarsat-2 and Palsar data, respectively. Table 6 contains the results of these combinations for both radar sensors. The Radarsat-2 combination provided an overall accuracy of 71%, an improvement of 12% when compared to the despeckled-only radar image classification of 59%. The overall classification accuracy of the Palsar original despeckled image combined with the best texture measure image improved slightly when compared to the classification of the single Palsar original despeckled image alone, from 77% to 78%.

The producer's accuracy of the water class was high, 84% and 100%, for the two radar wavelengths. By adding texture measures to the original imagery, the forest and urban classes performed much better in both producer's and user's accuracies when compared to the original Radarsat-2 imagery. The texture measures in these classes, when combined with the original image, greatly enhance the classification results. The gains for the Palsar image were lower, as the overall classification increased by only 1%. The producer's accuracy of the urban class did increase by 7%, but that of the forest class actually decreased by a nominal 1%. Palsar continues to provide better results than Radarsat-2.

4.5. Combining multiple wavelength radar images

The recent increase in types of spaceborne radar allowed this analysis to include classifying radar images from two different portions of the electromagnetic spectrum. The Palsar sensor collects data in the L-band, while Radarsat-2 collects data in the C-band. Both of the images used in this analysis were despeckled with a 5 × 5 window.
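The combination step, layer stacking followed by the ML decision rule, can be sketched as follows. The band arrays, class signatures and shapes here are synthetic placeholders, and equal priors are assumed; this illustrates the mechanics rather than reproducing the study's processing chain:

```python
import numpy as np

def ml_classify(pixels, signatures):
    """Maximum-likelihood decision rule: assign each pixel vector to the
    class whose multivariate Gaussian (mean and covariance estimated
    from training AOI pixels) gives the highest log-likelihood."""
    scores = []
    for mean, cov in signatures:
        diff = pixels - mean
        mahal = np.einsum('ni,ij,nj->n', diff, np.linalg.inv(cov), diff)
        scores.append(-0.5 * (np.log(np.linalg.det(cov)) + mahal))
    return np.argmax(np.stack(scores), axis=0)

rng = np.random.default_rng(0)
rows, cols = 64, 64
hh = rng.random((rows, cols))    # stand-in: despeckled HH band
hv = rng.random((rows, cols))    # stand-in: despeckled HV band
tex = rng.random((rows, cols))   # stand-in: variance texture band

# Layer stack into one multiband image, then flatten to pixel vectors.
stack = np.dstack([hh, hv, tex])            # (rows, cols, 3)
pixels = stack.reshape(-1, 3)

# Signatures would come from training AOIs: (mean vector, covariance).
signatures = [
    (np.zeros(3), np.eye(3)),        # hypothetical class 0
    (np.full(3, 5.0), np.eye(3)),    # hypothetical class 1
]
labels = ml_classify(pixels, signatures).reshape(rows, cols)
```

In practice the per-class mean and covariance would be estimated from the training AOI pixels, for example `mean = aoi_pixels.mean(axis=0)` and `cov = np.cov(aoi_pixels.T)`.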
The combined Washington Palsar and Radarsat-2 images had a slight increase in overall accuracy to 78% (Table 7) when compared to the 77% overall accuracy achieved when classifying the despeckled Palsar image alone. However, there are some interesting class differences, with less range in producer's and user's accuracies. For the combined radar, the producer's accuracies varied from 71% to 94%, while in the original Palsar the range was from 49% to 100%. The suburban class for original Palsar

Table 6. Error matrices of Washington original despeckled imagery combined with the best derived texture measure.

Radarsat-2 Original and Texture 13 × 13
             Water    Forest   Suburban   Urban    User's
Water        4177     0        0          0        1.000
Forest       3        3535     1859       52       0.649
Suburban     787      1203     1867       718      0.408
Urban        2        155      860        4369     0.811
Producer's   0.841    0.722    0.407      0.850    0.712

Palsar Original and Texture 17 × 17
             Water    Forest   Suburban   Urban    User's
Water        4959     0        0          0        1.000
Forest       0        3551     1048       4        0.771
Suburban     0        1226     2209       539      0.556
Urban        10       116      1329       4596     0.760
Producer's   0.998    0.726    0.482      0.894    0.782
increased from 49% to 71% in the fused radar. These reduced class-by-class variations support the integration of multi-wavelength radar for land cover/use mapping.

4.6. Combining optical and radar images

The data acquired for this study provide an opportunity to integrate the radar and texture measures with the Aster multispectral image. The fusion of different radar wavelengths with optical imagery in land cover/use classification is a relatively new area of research (Santos and Messina 2008, Amarsaikhan et al. 2012). This may be due in part to the much lower accessibility of radar data compared to the optical data available to scientists and researchers alike.

The Washington Aster-only land cover/use classification overall accuracy was 90%, whereas the best classification accuracy for a radar data set, achieved with Palsar, was 77%. The addition of the Palsar imagery to the Aster increased the overall accuracy to 93% (Table 8). The Radarsat-2 texture measure with a window size of 13 × 13 produced the best overall accuracy result for the Washington imagery texture

Table 7. Washington multi-wavelength error matrices combining Radarsat-2 and Palsar.

             Water    Forest   Suburban   Urban    User's
Water        4650     0        0          0        1.000
Forest       0        3269     699        75       0.809
Suburban     0        1596     3252       1007     0.555
Urban        319      28       635        4057     0.805
Producer's   0.936    0.668    0.709      0.789    0.777

Table 8. Error matrices of Washington multispectral optical, radar and texture combinations.

Aster and Palsar
             Water    Forest   Suburban   Urban    User's
Water        4913     0        0          0        1.000
Forest       0        4442     184        4        0.959
Suburban     0        432      4142       427      0.828
Urban        56       19       260        4708     0.934
Producer's   0.989    0.908    0.903      0.916    0.929

Aster and Radarsat-2 Texture 13 × 13
             Water    Forest   Suburban   Urban    User's
Water        4805     0        0          0        1.000
Forest       0        3887     209        0        0.949
Suburban     16       931      4162       465      0.747
Urban        148      75       215        4674     0.914
Producer's   0.967    0.794    0.908      0.910    0.895

Aster and Palsar and Radarsat-2 Texture 17 × 17
             Water    Forest   Suburban   Urban    User's
Water        4966     0        0          0        1.000
Forest       0        3793     155        0        0.961
Suburban     0        1034     3912       359      0.737
Urban        3        66       519        4780     0.890
Producer's   0.999    0.775    0.853      0.930    0.891
measures (71%). This layer was then combined with the Aster data, which yielded an overall accuracy of 90%. Finally, the best texture measure, which again was the Radarsat-2 with a window size of 13 × 13, and the best of the original radar, which was the Palsar image, were layer stacked with the Aster data. These combined data layers were analysed and the land cover/use classification generated an overall accuracy of 89%.

For the Washington, DC, location and its land cover/use classes, the combination of the radar or derived radar texture measures did not improve overall accuracies much over the original Aster. Given that the Aster independently had a classification accuracy of 90%, there was little opportunity for improvement. There are, however, some specific class improvements with the sensor fusion, such as the producer's accuracy for water increasing from 87% in Aster alone to 100% with sensor integration.

5. Summary

Land cover/use information represents an important resource for tracking humans' impact on the earth's surface. Without adequate land cover/use information, decision-makers often fail to make reliable decisions concerning the sustainable planning and management of land resources. This in turn can have disabling effects, both medium and long term, on countries' self-sustainability.

The most common method of collecting land cover/use data is the use of optical sensors on board aerial and spaceborne platforms. These methods, although largely successful, continue to be impacted by cloud cover, especially in low-latitude tropical and high-latitude locations, presenting a challenge for continuous observation and monitoring of land resources. Radar, still a relatively new area of research for land cover/use mapping (Hoekman et al. 2010), has the potential to overcome these challenges. The electromagnetic waves of radar are almost unaffected by atmospheric interference and provide all-weather land observation data.
As these data become increasingly available, it is expected that there will be an increased need for studies examining the suitability of radar, both as a surrogate for and as a complementary source to optical data, for land cover/use mapping in different parts of the world.

In this study, the potential of using radar for supporting land cover/use mapping was examined. Of the two radar sensors evaluated, the original Palsar data produced much better classification results than Radarsat-2. Texture, a tool used widely in land cover research, was also evaluated. Results showed that derived radar texture values were variable in their ability to improve classifications. The Radarsat-2 texture measures resulted in classifications better than the despeckled original image by 12%. Further analysis showed that, overall, the Palsar L-band did not perform as well as the Radarsat-2 C-band when generating classifications using a texture measure. These results are consistent with the findings of Li et al. (2012), comparing L-band and C-band radar over Brazil, a humid tropical area, relative to the Washington, DC, location examined in this research. Also interesting, and consistent with the literature, the best classification accuracy improvements were seen in the urban class using Radarsat-2 imagery texture. Urban spaces, known for their difficulty in mapping because of their complex mix of human-transformed properties, can therefore benefit from the use of radar to support land cover/use mapping of this class. This is especially important given the accelerated growth of many such areas over the last 50 years.

The combination of radar and radar-derived texture measures was also explored. The classification results of the combined original radar and texture images showed varied
increases when compared to the overall accuracy of the despeckled-only radar image classifications. There was virtually no improvement for Palsar, but a 7% increase from the best Radarsat-2 texture when the original was combined with this measure. When the radar images from two different portions of the electromagnetic spectrum were combined, the result was no improvement over the use of independent Palsar. However, the initial Palsar results were quite good.

Finally, this study in part reinforced the value of optical imagery. The results for the classifications using the independent Aster imagery were excellent. Even when optical imagery is available, radar imagery can help improve the classification results. When the radar imagery was added to the Aster optical image, the overall accuracy improved, but only marginally, from 90% to 93%. However, the water producer's accuracies were higher than with the optical alone, and the urban accuracy was equal to the optical. For the Washington, DC, data, independent radar sensor land cover/use classification accuracies do not compete with those of optical imagery. However, the overall radar accuracy of 78% would be very useful in those regions of the world where cloud cover or other factors limit the availability of optical acquisitions.

Several limitations were also identified during the course of this research, which form part of improvements for future research. First, only a few generic land cover/use classes were examined. Although results were generally good for the combination of optical and radar data, both overall and for individual classes, the classes selected may not be appropriate for other areas of study, which may have different definitions for these classes. Also, these classes may not be appropriate for better understanding the overall drivers of land cover/use change taking place on the ground.
Investigation of more detailed classes is therefore needed and may lead to results different from those obtained in this study. Second, only one classification method was investigated, the ML decision rule. Other methods, such as support vector machines and random forests, should also be investigated and their results compared to determine the most suitable method. Third, this study utilised only one measure of texture. Additional measures should be examined and compared to produce more conclusive results as to the most suitable texture measure. Finally, several studies have already investigated the use of multidate radar as a possible means of improving classification results (Le Hegarat-Mascle et al. 2000, Shao et al. 2001, Chust et al. 2004). These types of data should also be examined further, both as a single data source and as a complementary source to optical data.

Acknowledgements
The authors would like to thank the following organisations for providing the imagery used in this research. Radarsat-2 images were provided by the Canadian Space Agency under project 3126 of the Science and Operational Application Research for RADARSAT-2 programme. The Alaska Satellite Facility, under sponsorship from NASA, provided the PALSAR imagery. Finally, the NASA Land Processes Distributed Active Archive Center at the USGS Earth Resources Observation and Science (EROS) Center provided the ASTER imagery.

Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
Additional support was provided through grants received by the Department of Geography and Geoinformation Science at George Mason University.

References
Alparone, L., et al., 2004. Landsat ETM+ and SAR image fusion based on generalized intensity modulation. IEEE Transactions on Geoscience and Remote Sensing, 42 (12), 2832–2839. doi:10.1109/TGRS.2004.838344
Al-Tahir, R., Richardson, T., and Mahabir, R., 2009. Advancing the use of earth observation systems for the assessment of sustainable development. Association of Professional Engineers of Trinidad and Tobago, 38, 6–15.
Al-Tahir, R., Saeed, I., and Mahabir, R., 2014. Application of remote sensing and GIS technologies in flood risk management. In: D.D. Chadee, J.M. Sutherland, and J.B. Agard, eds. Flooding and climate change: sectorial impacts and adaptation strategies for the Caribbean region. Hauppauge, NY: Nova, 137–150.
Amarsaikhan, D., et al., 2007. The integrated use of optical and InSAR data for urban land-cover mapping. International Journal of Remote Sensing, 28 (6), 1161–1171. doi:10.1080/01431160600784267
Amarsaikhan, D., et al., 2012. Comparison of multisource image fusion methods and land cover classification. International Journal of Remote Sensing, 33 (8), 2532–2550. doi:10.1080/01431161.2011.616552
Anderson, C., 1998. Texture measures in SIR-C images. IEEE International Geoscience and Remote Sensing Symposium Proceedings, IGARSS '98, 3, 1717–1719. doi:10.1109/IGARSS.1998.692452
Anderson, J.R., et al., 1976. A land use and land cover classification system for use with remote sensor data. US Geological Survey Professional Paper, No. 964. Washington, DC, p. 28.
Asner, G., 2001. Cloud cover in Landsat observations of the Brazilian Amazon. International Journal of Remote Sensing, 22 (18), 3855–3862. doi:10.1080/01431160010006926
Bouchemakh, L., et al., 2008. A comparative study of speckle filtering in polarimetric RADAR SAR images. In: 3rd international conference on information and communication technologies: from theory to applications, ICTTA 2008, 1–6. doi:10.1109/ICTTA.2008.4530040
Campbell, J. and Wynne, R., 2012. Introduction to remote sensing. 5th ed. New York, NY: Guilford Press, 626.
Canadian Space Agency, 2008. Radarsat-1 [online]. Available from: http://www.space.gc.ca/asc/eng/satellites/radarsat1/default.asp [Accessed 2008].
Cervone, G. and Haack, B., 2012. Supervised machine learning of fused RADAR and optical data for land cover classification. Journal of Applied Remote Sensing, 6 (1), 063597. doi:10.1117/1.JRS.6.063597
Champion, I., et al., 2008. RADAR image texture as a function of forest stand age. International Journal of Remote Sensing, 29 (6), 1795–1800. doi:10.1080/01431160701730128
Chen, D., Stow, D., and Gong, P., 2004. Examining the effect of spatial resolution and texture window size on classification accuracy: an urban environment case. International Journal of Remote Sensing, 25 (11), 2177–2192. doi:10.1080/01431160310001618464
Chust, G., Ducrot, D., and Pretus, J.L., 2004. Land cover discrimination potential of radar multi-temporal series and optical multispectral images in a Mediterranean cultural landscape. International Journal of Remote Sensing, 25 (17), 3513–3528. doi:10.1080/0143116032000160480
Congalton, R. and Green, K., 1999. Assessing the accuracy of remotely sensed data: principles and practices. 1st ed. Boca Raton, FL: CRC Press, 137.
Dekker, R.J., 2003. Texture analysis and classification of ERS SAR images for map updating of urban areas in The Netherlands. IEEE Transactions on Geoscience and Remote Sensing, 41 (9), 1950–1958. doi:10.1109/TGRS.2003.814628
Dell'Acqua, F., Gamba, P., and Lisini, G., 2003. Improvements to urban area characterization using multitemporal and multiangle SAR images. IEEE Transactions on Geoscience and Remote Sensing, 41 (9), 1996–2004. doi:10.1109/TGRS.2003.814631
Ehlers, M., 1991. Multisensor image fusion techniques in remote sensing. ISPRS Journal of Photogrammetry and Remote Sensing, 46 (1), 19–30. doi:10.1016/0924-2716(91)90003-E
Eshan, G., 2011. Use of radar remote sensing for land use dynamic monitoring in South West Coast of Caspian sea. International Journal of Geomatics and Geosciences, 2 (2), 472–480.
Haack, B. and Bechdol, M., 2000. Integrating multisensor data and RADAR texture measures for land cover mapping. Computers & Geosciences, 26 (4), 411–421. doi:10.1016/S0098-3004(99)00121-1
Haack, B., Mahabir, R., and Kerkering, J., 2014. Remote sensing-derived national land cover land use maps: a comparison for Malawi. Geocarto International, 1–23. doi:10.1080/10106049.2014.952355
Hall, D.L. and Llinas, J., 1997. An introduction to multisensor data fusion. Proceedings of the IEEE, 85 (1), 6–23. doi:10.1109/5.554205
Hansen, M., Dubayah, R., and DeFries, R., 1996. Classification trees: an alternative to traditional land cover classifiers. International Journal of Remote Sensing, 17 (5), 1075–1081. doi:10.1080/01431169608949069
Haralick, R., Shanmugam, K., and Dinstein, I., 1973. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, SMC-3 (6), 610–621. doi:10.1109/TSMC.1973.4309314
Henderson, F., et al., 2002. Evaluation of SAR-optical imagery synthesis techniques in a complex coastal ecosystem. Photogrammetric Engineering and Remote Sensing, 68 (8), 839–846.
Herold, M., Liu, X., and Clarke, K., 2003. Spatial metrics and image texture for mapping urban land use. Photogrammetric Engineering and Remote Sensing, 69 (9), 991–1001. doi:10.14358/PERS.69.9.991
Herold, N., Haack, B., and Solomon, E., 2004. An evaluation of RADAR texture for land use/cover extraction in varied landscapes. International Journal of Applied Earth Observation and Geoinformation, 5 (2), 113–128. doi:10.1016/j.jag.2004.01.005
Hoekman, D., Vissers, M., and Wielaard, N., 2010. PALSAR wide-area mapping of Borneo: methodology and map validation. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 3 (4), 605–617. doi:10.1109/JSTARS.2010.2070059
JAXA, 2006. Image data acquired by the Palsar onboard the "Daichi". Japanese Aerospace Exploration Agency [online]. Available from: http://www.jaxa.jp/press/2006/02/20060217_daichi_e.html [Accessed February 2008].
Kurosu, T., et al., 1999. Texture statistics for classification of land use with multitemporal JERS-1 SAR single look imagery. IEEE Transactions on Geoscience and Remote Sensing, 37 (1), 227–235. doi:10.1109/36.739157
Le Hégarat-Mascle, S., Bloch, I., and Vidal-Madjar, D., 1998. Introduction of neighborhood information in evidence theory and application to data fusion of radar and optical images with partial cloud cover. Pattern Recognition, 31 (11), 1811–1823. doi:10.1016/S0031-3203(98)00051-X
Le Hégarat-Mascle, S., et al., 2000. Land cover discrimination from multitemporal ERS images and multispectral Landsat images: a study case in an agricultural area in France. International Journal of Remote Sensing, 21 (3), 435–456. doi:10.1080/014311600210678
Li, G., et al., 2012. A comparative analysis of ALOS PALSAR L-band and RADARSAT-2 C-band data for land-cover classification in a tropical moist region. ISPRS Journal of Photogrammetry and Remote Sensing, 70, 26–38. doi:10.1016/j.isprsjprs.2012.03.010
Lloyd, C., et al., 2004. A comparison of texture measures for the per-field classification of Mediterranean land cover. International Journal of Remote Sensing, 25 (19), 3943–3965. doi:10.1080/0143116042000192321
Lu, Y., et al., 1996. Adaptive filtering algorithms for SAR speckle reduction. IEEE Geoscience and Remote Sensing Symposium Proceedings, IGARSS 1996, 1, 67–69.
Luo, R.C., Yih, C., and Su, K.L., 2002. Multisensor fusion and integration: approaches, applications, and future research directions. IEEE Sensors Journal, 2 (2), 107–119. doi:10.1109/JSEN.2002.1000251
Maghsoudi, Y., Collins, M., and Leckie, D., 2012. Speckle reduction for the forest mapping analysis of multi-temporal Radarsat-1 images. International Journal of Remote Sensing, 33 (5), 1349–1359. doi:10.1080/01431161.2011.568530
Mahabir, R. and Al-Tahir, R., 2008. The role of spatial data infrastructure in the management of land degradation in small tropical Caribbean islands [online]. In: Tenth International Conference for Spatial Data Infrastructure, 25–29 February, St. Augustine, Trinidad, 25–29. Available from: http://www.gsdi.org/gsdiconf/gsdi10/prog_details.html [Accessed 15 January 2015].
Maillard, P., 2003. Comparing texture analysis methods through classification. Photogrammetric Engineering and Remote Sensing, 69 (4), 357–367. doi:10.14358/PERS.69.4.357
Nyoungui, A., Tonye, E., and Akono, A., 2002. Evaluation of speckle filtering and texture analysis methods for land cover classification from SAR images. International Journal of Remote Sensing, 23 (9), 1895–1925. doi:10.1080/01431160110036157
Pereira, L.P., et al., 2013. Optical and radar data integration for land use and land cover mapping in the Brazilian Amazon. GIScience & Remote Sensing, 50 (3), 301–321. doi:10.1080/15481603.2013.805589
Pohl, C. and Van Genderen, J.L., 1998. Review article: multisensor image fusion in remote sensing: concepts, methods and applications. International Journal of Remote Sensing, 19 (5), 823–854. doi:10.1080/014311698215748
Richards, J.A. and Jia, X., 2005. Remote sensing and digital image analysis. 5th ed. Berlin: Springer, 194–199.
Santos, C. and Messina, J., 2008. Multi-sensor data fusion for modeling African Palm in the Ecuadorian Amazon. Photogrammetric Engineering and Remote Sensing, 74 (6), 711–723. doi:10.14358/PERS.74.6.711
Sawaya, S., et al., 2010. Land use/cover mapping with quad-polarization RADAR and derived texture measures near Wad Madani, Sudan. GIScience & Remote Sensing, 47 (3), 398–411. doi:10.2747/1548-1603.47.3.398
Shao, Y., et al., 2001. Rice monitoring and production estimation using multitemporal RADARSAT. Remote Sensing of Environment, 76 (3), 310–325. doi:10.1016/S0034-4257(00)00212-1
Sheoran, A. and Haack, B., 2013. Classification of California agriculture using quad polarization radar data and Landsat Thematic Mapper data. GIScience & Remote Sensing, 50 (1), 50–63. doi:10.1080/15481603.2013.778555
Shiraishi, T., et al., 2014. Comparative assessment of supervised classifiers for land use–land cover classification in a tropical region using time-series PALSAR mosaic data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 7 (4), 1186–1199. doi:10.1109/JSTARS.2014.2313572
Solberg, A. and Anil, K., 1997. Texture fusion and feature selection applied to SAR imagery. IEEE Transactions on Geoscience and Remote Sensing, 10 (6), 989–1003. doi:10.1109/36.563288
Töyrä, J., Pietroniro, A., and Martz, L., 2001. Multisensor hydrologic assessment of a freshwater wetland. Remote Sensing of Environment, 75, 162–173. doi:10.1016/S0034-4257(00)00164-4
USGS, 1988. River basins of the United States: the Potomac. Denver, CO: US Geological Survey, 9.
Villiger, E., 2008. Radar and multispectral image fusion options for improved land cover classification. PhD dissertation. George Mason University.
Waske, B. and Van Der Linden, S., 2008. Classifying multilevel imagery from SAR and optical sensors by decision fusion. IEEE Transactions on Geoscience and Remote Sensing, 46 (5), 1457–1466. doi:10.1109/TGRS.2008.916089