PHOTOGRAMMETRIC ANALYSIS: UNMANNED AERIAL VEHICLES & GLOBAL POSITIONING
Mark Wade 3119753
December 2014
THESIS: University of Newcastle
Abstract
The purpose of this thesis was to investigate the parameters involved in photogrammetry and discover the uses for UAV technology in combination with photogrammetry. This was achieved by the following methods:
• Investigating and understanding the geometric principles that form the algorithms and provide the basis for 3D scene reconstruction and modelling.
• Determining the accuracies required of measured auxiliary data in order to produce accurately georeferenced scene models without the need for ground control points.
• Determining the camera and image related properties that affect overall model accuracy, and seeking to optimise these through the use of a simple flight planning program for photogrammetric surveys.
The application of UAV photogrammetry is gaining momentum, and as technologies improve and are downsized, the limitations of UAV surveys diminish. There is a sound basis to the algorithms relating to photogrammetry and 3D scene reconstruction. The theorised application of UAV photogrammetry with direct measurement of auxiliary data has been proven and developed through integrated systems; however, to date these systems produce only raw measurements. The flight planning analysis, image properties and camera parameters that allow for accurate image acquisition have been investigated, and a basic program was developed to assist in determining suitable flight plans for photogrammetric surveys.
Table of Contents
1 Introduction
  1.1 Background
  1.2 Applications of Photogrammetry
  1.3 Scope
2 Literature Review
  2.1 Definitions
  2.2 History of Photogrammetry
    2.2.1 Invention of Aerial Photogrammetry
    2.2.2 How Aerial Photogrammetry works
    2.2.3 Close Range Photogrammetry
  2.3 Theory of Photogrammetry
    2.3.1 Structure from Motion ‘SfM’
    2.3.2 Interior Camera Orientation
  2.4 UAV’s and Aerial Photogrammetry
    2.4.1 Positioning and Camera Attitude
    2.4.2 Measuring Auxiliary data
3 Flight Plan Optimisation
  3.1 Ground/Object Coverage
  3.2 Number of Photographs for UAV Survey
  3.3 Pixel Size and Accuracies
    3.3.1 Planimetric Accuracy
    3.3.2 Height Accuracy
  3.4 Estimating Total UAV Survey Time
  3.5 Camera Settings
    3.5.1 Camera Sensors
    3.5.2 Shutter Speed
    3.5.3 Aperture
    3.5.4 ISO Sensitivity
4 Conclusions and Recommendations
  4.1 Further Research
5 Works Cited
6 Appendix A
  6.1 USER GUIDE WADE_flight2014
    6.1.1 To Begin
    6.1.2 Defining the survey Area
    6.1.3 Selecting appropriate Flight plan
7 Appendix B
  7.1 Cameras used in Comparisons
Table of Figures
Figure 1: Industrial Surveying point cloud
Figure 2: Industrial Photogrammetry accuracy
Figure 3: Mount St. Helens in Washington, USA
Figure 4: Transformations and Rotation matrix
Figure 5: Leonardo Da Vinci
Figure 6: Félix Nadar 1820-1910
Figure 7: Wild RC D10 Stereo plotter (NERC Science of The Environment, 2010)
Figure 8: Relationship between a stereo pair of images (Kniest, 2013)
Figure 9: Epipolar plane (Stojakovic, 2008)
Figure 10: SfM bundle adjustment geometry (Geodetic Systems Inc., 2014)
Figure 11: Dig Tsho SfM data products; oblique view showing per-cell (1 m²) point densities; data transformed to UTM Zone 45N geographic coordinate system
Figure 12: (Eisenbeis, 2009)
Figure 13: Professor Friedrich Ackerman (Ganjanakhundee, 2013)
Figure 14: Prototype of GPS receiver integrated with camera (G. Forlani, L. Pinto, R. Roncella, D. Pagliari, 2013)
Figure 15: KINGSPAD layout (J. Skaloud, M. Cramer, K.P. Schwarz, 1996)
Figure 16: Flight Optimisation for UAV surveys (Wade, 2014)
Figure 17: Camera showing Sensor (Digital Photography Review, 1998-2014)
Figure 18: Focal length and Principal point (ExposureGuide.com, 2014)
Figure 19: Comparisons between sensor sizes (GPS photography.com, 2012)
Figure 20: Ground Sampling Distance (Digital Photography Review, 1998-2014)
Figure 21: Image overlap along a strip (AdamTechnology, 2012)
Figure 22: Example for Image overlap (Wade, 2014)
Figure 23: Ground pixel size vs Dtarget (Wade, 2014)
Figure 24: Error ellipse (AdamTechnology, 2012)
Figure 25: Gp, Planimetric accuracy and Height accuracy vs Dtarget (Wade, 2014)
Figure 26: Comparison of lenses Sony SLT-A99 (Wade, 2014)
Figure 27: Number of images for different lenses (Wade, 2014)
Figure 28: Sony SLT-A99 35mm focal length (WADE_flight2014)
Figure 29: Sony SLT-A99 20mm focal length (WADE_flight2014)
Figure 30: Dji Ground Station 4.0 Waypoint editing
Figure 31: Input parameters for WADE_flight2014
Figure 32: WADE_flight2014 outputs for survey of football field using Canon EOS 6D
Figure 33: Camera Comparisons (Wade, 2014)
Figure 34: Camera Settings (Digital Camera World, 2012)
Figure 35: CCD (left) and CMOS (right) image sensors
Figure 36: Canon 5D Internal mechanisms (2000-2013 Little Guy Media, 2002)
Figure 37: WADE_flight2014 output showing all parameters
Figure 38: Camera Aperture (ExposureGuide.com, 2014)
Figure 39: Depth of field (ExposureGuide.com, 2014)
Figure 40: Changing ISO sensitivity (macrominded.com, 2014)
Figure 42: A2 flight controller components
1 Introduction
1.1 Background
Aerial photogrammetry has been used as a reliable source for mapping applications for several decades. Distances and elevations are determined through the use of formulas and complicated computer algorithms such as Structure from Motion (SfM) software packages. The high cost and time associated with aerial photography have limited its use in small scale applications. A small aerial photo set for use in volumetric calculations and other survey data can cost in excess of $2,000, and if data is corrupted or missing from the images, the aircraft must capture the entire set again. As well as cost and time, the accuracy of aerial photogrammetry is limited and useful only for applications that tolerate low accuracy.
Recent advancements in digital photography, Global Navigation Satellite Systems
(GNSS) and computer modelling software have made the process of gaining quality
data from an aerial survey simpler and quicker. The applications are extensive for
photogrammetry as it can be used to create realistic models of real world locations.
With the addition of aerial vehicle technology, data can be acquired easily at remote sites.
The technology of Unmanned Aerial Vehicles (UAV) has greatly improved over the
past decade and is becoming more cost effective and accessible to civilians.
According to the Unmanned Vehicle Systems International (UVSI) definition, “A
UAV is a generic aircraft designed to operate with no human pilot on board.” (UVS,
2014). There are several variables to consider when combining UAV technology
with photogrammetry. The defining elements of a survey will depend on the data
required and the budget.
This thesis investigates the applications of photogrammetry combined with new UAV drone technologies for the purpose of acquiring accurate survey data for a variety of engineering, environmental and surveying purposes.
1.2 Applications of Photogrammetry
The benefits of photogrammetry in environmental and engineering applications are extensive and continually increasing. The accuracy of the science has improved, and presently, in the right environment with computer aided software, its accuracies rival and even surpass those of other surveying instruments.
Figure 1: Industrial Surveying point cloud
Figure 1 shows a point cloud generated using a Structure from Motion (SfM)
software package Agisoft PhotoScan. The photogrammetric survey was undertaken
as part of a ‘check’ for an industrial survey at the University of Newcastle, NSW
Australia in 2014. The images were taken at random with only a basic understanding
of correct camera location and geometry. The results achieved after checking the
coordinates were of high accuracy. Figure 2 shows the accuracy achieved (column
5) for the check. As shown, the errors were less than 0.5mm for most coordinates.
Figure 2: Industrial Photogrammetry accuracy
For a method that took a fraction of the time compared to the actual survey, these
results are very promising for the automation and functionality of photogrammetric
software.
For photogrammetry in applications such as ongoing monitoring of mine sites or natural activity zones, once the initial set up has been completed the monitoring can be far quicker and more cost effective than traditional surveying.
Western Washington University graduate student Angela Diefenbach is pioneering a
new method for the study and evaluation of active volcanoes and their risk for
eruption. The monitoring team is using photogrammetry from static cameras, aerial
photography and commercially available software to build accurate three
dimensional models of the volcano. The models can then be compared to past
models to determine the change in volume and rate of growth, key indicators of
volcanic activity (Diefenbach, et al., 2006).
Figure 3: Mount St. Helens in Washington, USA
With the addition of UAV technology to photogrammetry, many previously
unavailable or inhospitable areas are now able to be surveyed without the need for
human interaction. Sensitive areas such as sacred indigenous sites and archaeological sites will benefit from the technological advancements that are making UAVs and photogrammetry invaluable. Accurate models and measurements can be realised over an untouched area, and the UAV allows images to be taken closer and at more angles than ever before. One example of this was the Ventimiglia project in Italy, an archaeological site where aerial and terrestrial photogrammetry were combined to achieve a root mean square error of around 3mm in the X and Y coordinates of the model, using only 5 control points (Erica Nocerino, Fabio Menna, Fabio Remondino, Renato Saleri, 2013).
In 2012, testing took place involving a UAV system and photogrammetry to track and model the growth and rate of the British Ice Sheet. The overall accuracy was found to be 0.5m, which was suitable for the purpose of mapping low amplitude bedforms (Clayton, 2012).
1.3 Scope
The scope of this thesis was to research the achievable accuracies of small UAV systems combined with GNSS/INS in photogrammetry without the use of ground control points (GCPs). This was achieved by reviewing various publications and experiments already undertaken, and by investigating the theoretical limitations and constraints that affect the accuracy of field data.
This thesis demonstrates how to optimise a flight plan for accurate and efficient data collection in order to streamline a photogrammetric survey. The charts and simplified equations presented in this thesis assist in determining the accuracy of a survey, and allow a user to determine the flight time and data required to produce the desired results.
The examination of camera and lens systems allows optimal combinations to be determined, and methods for determining an appropriate combination for individual purposes are also presented.
2 Literature Review
2.1 Definitions
Principal Distance:
The principal distance is determined through internal camera calibration and is defined as the focal length of the lens at infinity focus: the distance from the perspective centre of the lens to the image plane (Philipson & Philpot, 2012).
Principal Point of Auto-Collimation:
The PPA is the point on the image plane at which an image would originate if the focal plane of the camera were perfectly perpendicular to the direct axial ray coming through the perspective centre of the lens (Karara, 1998).
Fiducial Centre:
Principal point and Indicated Principal Point (IPP) are also terms used to describe this internal camera parameter. It is defined as the location on the image plane or image sensor where rays from opposing fiducial marks intersect. This would ideally be located in the centre of the image plane; however, it is found during interior orientation calibration of the camera and lens, and is affected by lens distortion. The distance from the PPA (x0, y0) to the IPP (xp, yp) rarely exceeds 1mm (Karara, 1998).
Radial Lens Distortion:
If the image formed by an ‘off-axis’ target is not in the position that a ‘perfect lens’ would produce, but is either radially closer to or further from the PPA, it is said to have been radially distorted. Radial distortion is generally represented graphically for a lens, with distortion in micrometres plotted against radial distance in millimetres. For photogrammetric purposes the distortion can be treated as symmetric about, and coincident with, the PPA.
Nadir Point:
The point on the image which corresponds to the ground nadir; that is, the point at which a vertical plumb line from the perspective centre of the lens to the ground nadir intersects the image plane.
Rotation Matrix:
Denoted as the matrix ‘R’ in the collinearity equations for exterior camera orientation. The rotation matrix describes the rotation of the camera about the three axes: yaw, pitch and roll (ω, φ, κ) or azimuth, tilt and swing (a, s, t). It is often shown as the ‘m’ matrix.
Figure 4: Transformations and Rotation matrix
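As a minimal illustration, the following Python sketch builds R from the three angles. The omega-phi-kappa axis order used here (R = Rz(κ)·Ry(φ)·Rx(ω)) is one common photogrammetric convention; other texts use different sequences, so this is an assumption, not the only valid form.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Camera attitude matrix R, assuming R = Rz(kappa) @ Ry(phi) @ Rx(omega).
    All angles in radians."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])   # roll  (omega)
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch (phi)
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])   # yaw   (kappa)
    return Rz @ Ry @ Rx

# A level, nadir-pointing camera rotated 90 degrees in yaw:
R = rotation_matrix(0.0, 0.0, np.pi / 2)
print(R)
```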
2.2 History of Photogrammetry
There is no universally accepted definition for the word photogrammetry; however a
very apt description by W.D. Philpot from Cornell University explains
photogrammetry as:
“Photo-gram-metry” (light-drawing-measurement)
“The art, science and technology of obtaining reliable spatial information about
physical objects and the environment through processes of recording, measuring and
interpreting photographic images and pattern of recorded radiant electromagnetic
energy and other phenomena.” (Philipson & Philpot, 2012).
Photogrammetry is the science of taking photographs to certain specifications and accurately deriving measurements from the images. The most popular and widely used applications for photogrammetry include land maps (aerial photogrammetry), topographic maps (aerial photogrammetry) and three dimensional modelling (close-range photogrammetry). Close range and aerial are the two categories into which photogrammetry is usually divided. This paper focuses on UAV photogrammetry from a multi rotor drone, which is unique in that it can fit into both categories.
The concept of photogrammetry dates back as far as 1480, when Leonardo da Vinci wrote:
“Perspective is nothing else than the seeing of an object behind a sheet of glass, smooth and quite transparent, on the surface of which all the things may be marked that are behind the glass. All things transmit their images to the eye by pyramidal lines, and these pyramids are cut by the said glass. The nearer to the eye these are intersected, the smaller the image of their cause will appear” (Doyle, 1964).
Figure 5: Leonardo Da Vinci
Over the next three to four hundred years the science slowly progressed, and a few significant developments paved the way for it to be widely used and accepted once camera systems could produce the images required for measurement. There were four main periods of development, each spanning roughly a 50 year cycle (Center for Photogrammetric Training, 2008):
• Plane table Photogrammetry, circa 1850 to 1900
• Analogue Photogrammetry, circa 1900 to 1960
• Analytical Photogrammetry, circa 1960 to present
• Digital Photogrammetry, present
2.2.1 Invention of Aerial Photogrammetry
With the invention of photography and the ability to take exposures during flight, the amalgamation of this technology with the military quickly followed. In 1855 a balloon flying at a height of around 80m obtained the first aerial photograph. Approximately four years later the same pilot, Félix Nadar, was employed by the French Emperor Napoleon III to acquire reconnaissance photographs in preparation for the Battle of Solferino. In order to transfer points from the photo to a map, a grid overlay was used and the battle plans were developed (Center for Photogrammetric Training, 2008).
Figure 6: Félix Nadar 1820-1910
2.2.2 How Aerial Photogrammetry works
Aerial Photogrammetry comes from images taken by a camera mounted in an aircraft
usually directed at the ground. In order to obtain measurements from the images, a
series of overlapping photographs are required along a flight path over the target
area. In the early days before computer software became available, the images were
processed in a stereo-plotter (an instrument that allowed an operator to view two
photos in a stereo view). Figure 7 below shows an example of a typical stereo-
plotter.
Figure 7: Wild RC D10 Stereo plotter (NERC Science of The Environment, 2010)
The images were aligned, and the stereo-plotter allowed the operator to tilt, rotate and scale the images using certain known details about each image. This process is now very much streamlined: computer software and mathematical modelling quickly align the photos into useable ‘stereo pairs’. Once the rotations and scale
have been corrected, measurements can be taken between the images to determine
how far apart the camera positions were as well as measuring planimetric distances
and elevations on the objects shown in the images. Figure 8 shows the relationship
between a stereo pair of photographs.
Figure 8: Relationship between a stereo pair of images (Kniest, 2013)
This method is no longer used as computer programs provide faster and more
accurate results, taking out any human error involved in the measurements.
As technology in photography and flight systems has progressed, so too has the ability to create accurate maps using photogrammetry. The main benefit of aerial photogrammetry at present is its ability to cover a large area far quicker than a field survey could. This, however, comes at the sacrifice of accuracy, as the vehicle is far from the object and provides only crude measurements in small scale applications. A closer look into how aerial photogrammetry is used presently is presented in section 2.4.
2.2.3 Close Range Photogrammetry
Close range or Terrestrial photogrammetry is simply defined as any photogrammetry
other than aerial. Typically the output of a close range photogrammetric survey is in
the form of a 3D model/point cloud from which measurements can be deduced.
Anything can be modelled: engineering structures, forensic scenes, mines, archaeological findings, even living beings.
Close range photogrammetry has the same origins as aerial, due to the shared nature of film and the camera. Precision advanced as methods for measurement and exposure were enhanced. Due to the relative closeness of objects to the lens, the resulting data is sharper, clearer and of higher quality compared to aerial. As the equations of measurement involve the distance from the object, precision and achievable accuracy also increase at close range.
2.3 Theory of Photogrammetry
Human sight is based on a stereo view, whereby the physical environment of an individual is given scale and depth through the intersecting light rays entering the eyes. The brain gathers the incoming information and recognises what it is seeing as a three dimensional space (Doyle, 1964). Structure from Motion (SfM) computer software uses complex algorithms and mathematical models to replicate this process and define the location of objects within an image. The software provides an arbitrary coordinate system to relate the image points.
The interior calibration of the camera and lens system is important as this defines the
reference coordinate system that photogrammetric measurements are based on. Most
SfM programs have an interior camera calibration function built in to simplify this
process. By calibrating a camera and lens, details such as principal distance, scale,
fiducial centre (principal point) and lens distortion can be determined under varying
conditions.
In order to model a two dimensional image in three dimensions, a reference coordinate system must be known between pairs of images. This coordinate system can be deduced using the known camera and image parameters. Epipolar planes are found and traced on the images; an epipolar plane intersects the fiducial centre of the image, and the software locates reference points on the intersecting plane (Stojakovic, 2008). This can be completed for any points that appear on more than one image across the entire model. Through this process, known as a bundle adjustment, the 3D model is created on an arbitrary coordinate system.
Figure 9: Epipolar plane (Stojakovic, 2008)
It is theorised that the precision of calibrated images may be increased by
considering right angles and locating points in the object space on these right angled
surfaces (Stojakovic, 2008).
2.3.1 Structure from Motion
SfM is a method of creating realistic three dimensional models by estimating the object space and geometry from images and calibrated internal camera specifications. This follows the same principle as stereo photogrammetry; however, it uses a dataset, usually with large redundancy, of overlapping images from a variety of locations and angles around the target object. The software can be either fully automated or partially automated, requiring user input. In order to create a 3D model the software requires at least three images and typically follows a process with the following steps:
• Image acquisition and key point extraction
• Creation of 3D geometry in a point cloud
• GCP location input
• Aligning images
• Mesh generation
• Orientation and translation onto a geo-referenced coordinate system (optional)
Figure 10: SfM bundle adjustment geometry (Geodetic Systems Inc., 2014)
The reconstruction of an object/scene begins by using the first pair of stereo images to create a model. SfM uses the algorithm to identify pairs of images, and the rotations/scales of the images, in relation to the known camera and lens parameters. Once this has been completed, the entire image dataset is included in the algorithm, which adds detail and structure to the model. The point cloud is then generated in the following steps, and the positions of the camera stations and the camera parameters are resolved.
In the early 1980s Hugh Christopher Longuet-Higgins discovered that if there were enough similar points observed between a pair of images, the camera positions and orientations could be determined using a set of simultaneous linear equations and a least squares solution. Because eight point correspondences between the two images are required, this became known as the eight-point algorithm. This method was ground-breaking at the time of its inception, and it allowed researchers to study, enhance and expand Longuet-Higgins’ two-view reconstruction to 3, 4 and n-view algorithms (Hartley, 2004).
SfM models are based on the n-view reconstruction algorithms and can vary from
software to software. Depending on certain aspects of the algorithm, they are more
suited to certain types of models and photogrammetric reconstructions. The
algorithms all stem from the simple epipolar type lines between images that were
used in stereo-photogrammetry. In Hartley & Zisserman’s book ‘Multiple View Geometry in Computer Vision’, several of the various methods of reconstruction are described in great detail.
If the distance from the camera station to the scene is large relative to the depth within the scene, there is an effective method to compute the scene geometry. To reconstruct a scene from n views, a simplified camera model known as the affine camera is used to approximate the perspective projection. If a set of points is visible in a set of n views involving the affine camera, an algorithm known as the factorisation algorithm can be used to compute the geometry of the scene and the specific camera models in one step (Hartley, 2004). The downfall of this model is that all points must be visible in all views, which is not practical for most aerial photography.
The process known as Bundle Adjustment was developed and is now the dominant
methodology for 3D scene reconstruction for n-view models. The bundle adjustment
procedure attempts to fit a non-linear model to the images and points. The benefit of
this method is that it is a simplified and generalised procedure that has the ability to
be applied to a wide variety of problems. Due to the iterative nature of the bundle
adjustment however it is not guaranteed to converge on the optimal solution from an
arbitrary starting point. In order to rectify this problem however, most SfM packages
use an initialization step before the bundle adjustment to compute a ‘best guess’
starting point for the algorithm (Hartley, 2004). How well the initialisation process is completed determines how quickly the bundle adjustment iterations converge; however, with computer power ever increasing, these processes are becoming quicker regardless of the starting point.
An example of the output from a SfM package is shown in Figure 11. It shows the Dig Tsho moraine-dam complex in the Khumbu Himal, Nepal, captured using camera stations located around the site. 1649 images and 35 GCPs were used to create the reference frame of the 3D model. The dense reconstruction of the model created a point cloud of 13.2 × 10⁶ points, and processing took 22 hrs. The data was geo-referenced using collected GPS information (M.J. Westoby, J. Brasington, N.F. Glasser, J. Hambrey, J.M. Reynolds, 2012).
Figure 11: Dig Tsho SfM data products; oblique view showing per-cell (1 m²) point densities. Data transformed to UTM Zone 45N geographic coordinate system.
2.3.2 Interior Camera Orientation
All camera lens systems have imperfections which would not be noticed by the naked eye. When dealing with very fine detail in photogrammetry and the angles between epipolar lines, it is important to note how these imperfections affect the direction of the rays through the lens.
direction of the rays through the lens. Karara (1998) said “Interior orientation is the
term used to describe the parameters which model the passage of light rays through
the lens and onto the image plane”.
The camera calibration provides the transformation between an image point and its light ray, in what is referred to as Euclidean 3-space, via coefficients ‘k’ (Zisserman, 1999). Euclidean 3-space is simply a representation of the three dimensional coordinate space in which the camera parameters may be observed. The ‘k’ terms appear as the coefficients of the polynomial series shown in equation (1), used to determine the radial distortion of a lens:

\delta r = k_1 r^3 + k_2 r^5 + k_3 r^7 + \dots \quad (1)

where r is the radial distance of an image point from the PPA and \delta r is the radial distortion.
Karara (1998) states that “For most lenses three coefficients [k1, k2, k3] are sufficient to describe the distortion curve completely, but for exceptional lenses such as the ‘fish-eye’ up to five coefficients may be required.” Zisserman (1999) describes in detail how the matrices and algorithms of lens calibration are derived, and Balletti, et al. (2014) investigated comparisons between various calibration techniques; however, these details are not essential to the subject of this thesis. It is sufficient to understand that the lens distortion parameters must be known and defined in order to reduce errors in measurement and achieve the most effective and accurate results for a photogrammetric survey.
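As an illustrative sketch of equation (1), the following Python code radially shifts an image point to remove lens distortion. The coefficient values used here are hypothetical stand-ins for the k1-k3 that a real camera calibration would supply.

```python
import math

def radial_distortion(r_mm, k1, k2, k3):
    """Radial distortion (mm) at radial distance r from the PPA,
    per the odd-order polynomial of equation (1)."""
    return k1 * r_mm**3 + k2 * r_mm**5 + k3 * r_mm**7

def undistort(x_mm, y_mm, k1, k2, k3):
    """Shift an image point (coordinates relative to the PPA) radially
    to remove lens distortion. First-order correction only."""
    r = math.hypot(x_mm, y_mm)
    if r == 0.0:
        return x_mm, y_mm
    dr = radial_distortion(r, k1, k2, k3)
    scale = (r - dr) / r
    return x_mm * scale, y_mm * scale

# Hypothetical coefficients, for illustration only:
print(undistort(12.0, 8.0, k1=5e-5, k2=-1e-8, k3=0.0))
```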
2.4 UAV’s and Aerial Photogrammetry
Unmanned Flight system technologies have seen exponential growth over the past
decade, resulting in highly autonomous and accessible machines capable of a large
range of functions. Figure 12 shows the relationship between object area size and
accuracy using UAV’s compared to other forms of photogrammetry in 2006.
Figure 12: (Eisenbeis, 2009)
This figure would look different now due to the progress UAV technology has made over the past several years. Advancements in accuracy and flight navigation software have made UAV systems an economical and viable option for photogrammetry in a number of applications. Real time positioning using GNSS receivers mounted to the body of a UAV increases the positioning accuracy of the camera station at the time an image is captured, which in turn increases the overall accuracy of the geo-referenced model.
2.4.1 Positioning and Camera Attitude
Photogrammetry and SfM packages currently work with control points that have
been located and coordinated before the model is created in order to scale and geo-
reference the data. From the known positions of GCP’s and the interior calibration
parameters of the lens and camera, the position and orientation of the image stations
are determined.
An idea was proposed by the German Friedrich Ackerman in 1982 in his paper ‘Utilisation of Navigation Data for Aerial Triangulation’. He proposed that if one could accurately measure camera orientation data in conjunction with GPS navigation recorded during a flight, aerial triangulation techniques could become obsolete.
“It was also demonstrated that the utilization of such auxiliary data in block
adjustment is highly effective…further research is required and encouraged into
GPS and Inertial navigation systems…” (Ackerman, 1984, p. 7).
Ackerman undertook an experiment in 1982, named Bodensee, to prove his theory of the importance of this auxiliary positional data. The test included five flight strips over the Lake of Constance, covering an area of 480 km². Beacons around the site gave the aircraft a reference for navigational data, which was then post-processed to give x-y coordinates of the camera stations for each image. The overall trilateration adjustment was found to have an average precision in the order of 1m.
Ackerman was impressed with the result and could see that the only limitations of
his theory would be in the auxiliary data.
Figure 13: Professor Friedrich Ackerman (Ganjanakhundee, 2013)
“..ground control points can only be deleted completely when constant or
systematic errors of the auxiliary data are negligible or are calibrated otherwise, the
example demonstrates convincingly the effectiveness of auxiliary positioning data
and the success of joint block adjustment.” (Ackerman, 1984, p. 7).
2.4.2 Measuring Auxiliary data
The concept of using the auxiliary data to negate the need for GCP’s has continued
to be an interesting area for experimentation since Ackerman’s Bodensee project.
The formulae on which photogrammetric software is based indicate that it is mathematically possible to determine an object’s size, shape and location without using measured points on the object itself.
The auxiliary data consists of two parameters containing 6 unknowns for each camera position:
• GPS position (X, Y, Z)
• Camera attitude (pitch, yaw and roll)
In order to achieve the best possible adjustment, the two parameters need to be
measured accurately and at very specific times. For example, in Bodensee,
Ackerman and his staff were able to measure GPS position with a variance in
coordinate of 2.2m (Ackerman, 1984, p. 6). This resulted in a 1m overall precision
of coordinated points. If a measurement is not recorded at the exact time an image is taken, the errors can be substantial, as the plane or UAV is travelling at speed. If a UAV were flying at 5 m/s and the GPS coordinate were recorded 1/10th of a second after the image was taken, the camera station would be out by 0.5m. Image timing is discussed in chapter 3.4 of this paper.
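The sensitivity to synchronisation follows directly from distance = speed × time; a minimal sketch reproducing the 0.5m figure and showing how the error grows with latency:

```python
def position_error(speed_ms, time_offset_s):
    """Along-track camera-station error caused by a GPS/exposure
    synchronisation offset (distance = speed x time)."""
    return speed_ms * time_offset_s

for dt in (0.01, 0.05, 0.1):
    print(f"5 m/s, {dt} s offset -> {position_error(5.0, dt):.2f} m")
# 0.1 s at 5 m/s gives the 0.5 m error quoted above.
```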
In the mathematical model of a bundle adjustment the collinearity equations are:
\xi = \xi_0 - c\,\frac{r_{11}(x - x_0) + r_{12}(y - y_0) + r_{13}(z - z_0)}{r_{31}(x - x_0) + r_{32}(y - y_0) + r_{33}(z - z_0)} \quad (2)

\eta = \eta_0 - c\,\frac{r_{21}(x - x_0) + r_{22}(y - y_0) + r_{23}(z - z_0)}{r_{31}(x - x_0) + r_{32}(y - y_0) + r_{33}(z - z_0)} \quad (3)

(G.Forlani, L. Pinto, R. Roncella, D. Pagliari, 2013)
Where:
ξ & η Image coordinates
ξ0, c & η0 Interior orientation parameters of the camera
x, y, z Ground coordinates of a point
x0, y0, z0 Perspective centre coordinates
rij Attitude matrix (rotation matrix from object system to camera system with elements rij), shown in chapter 2.1.
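A minimal Python sketch of equations (2) and (3), projecting a ground point into image coordinates; it assumes the row convention written above (rotation from object to camera) and ignores lens distortion.

```python
import numpy as np

def project(ground_pt, cam_pos, R, c, xi0=0.0, eta0=0.0):
    """Collinearity equations (2)-(3): image coordinates (xi, eta) of a
    ground point, given the camera position, attitude matrix R
    (object -> camera) and principal distance c (consistent units)."""
    d = np.asarray(ground_pt, float) - np.asarray(cam_pos, float)
    u = R @ d                       # point expressed in the camera frame
    xi  = xi0  - c * u[0] / u[2]
    eta = eta0 - c * u[1] / u[2]
    return xi, eta

# Nadir camera 20 m above ground, point 2 m from the ground nadir, c = 20 mm:
print(project([2.0, 0.0, 0.0], [0.0, 0.0, 20.0], np.eye(3), c=0.020))
# -> roughly 2 mm from the principal point on the image plane
```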
It is important to note that when the GPS position is recorded, the antenna phase centre is not located in the same position as the image centre. An observation equation needs to be included in the adjustment in order to relate the camera position to the GPS antenna phase centre (G.Forlani, L. Pinto, R. Roncella, D. Pagliari, 2013, p. 4).
x_a(t) = x_0(t) + R(t)\,e \quad (4)

Where:
xa The position of the GPS antenna phase centre at the time of exposure, in a cartesian/local coordinate system
x0 The coordinates of the camera perspective centre (lens sensor)
e The offset of the GPS antenna in the image space (fixed)

R(t) = R(\omega, \phi, \kappa) \quad (5)

R is (as above) the attitude/rotation matrix at the time of exposure t.
It is important that the vector ‘e’ is determined via a calibration and that it stays
constant. This way the only unknown in the equation is the coordinate of the image
centre (perspective centre). The unknown is easily solved for during the bundle
adjustment.
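Rearranged for the unknown, equation (4) gives the perspective centre as x0 = xa − R·e. A minimal sketch, with an illustrative 0.25m vertical antenna offset:

```python
import numpy as np

def camera_centre(x_antenna, R, e):
    """Equation (4) rearranged: x0 = xa - R @ e.
    x_antenna : GPS antenna phase-centre position at exposure time (m)
    R         : attitude/rotation matrix at exposure time
    e         : fixed, calibrated antenna offset in the camera frame (m)"""
    return np.asarray(x_antenna, float) - R @ np.asarray(e, float)

# Illustrative values: antenna mounted 0.25 m above the lens, level platform.
x0 = camera_centre([500123.40, 6354821.10, 80.55], np.eye(3), [0.0, 0.0, 0.25])
print(x0)   # -> camera perspective centre
```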
Integration of a useable system for terrestrial photogrammetry has been shown to be
quite accurate without using ground control points. The article ‘Terrestrial
photogrammetry without ground control points’ examines a device where a camera
was mounted on a pole with GPS receiver on top. The eccentricity vector was
calculated accurately using a total station and was assumed as constant with respect
to the rotations of the camera (G.Forlani, L. Pinto, R. Roncella, D. Pagliari, 2013, p.
6).
Figure 14: Prototype of GPS receiver integrated with camera (G.Forlani, L. Pinto, R. Roncella, D.
Pagliari, 2013)
Table 1: Accuracy at tie points and check points (G.Forlani, L. Pinto, R. Roncella, D. Pagliari, 2013)
The study found that GPS and attitude data have a very strong theoretical application to improve the reliability and accuracy of a bundle adjustment; however, the authors believed that greater reliability of GPS positioning data is still needed. Table 1 shows the accuracies they were able to achieve using photogrammetry on a variety of objects.
In 1996 a paper was published by the University of Stuttgart along with the University of Calgary, titled ‘Exterior Orientation by Direct Measurement of Camera Position and Attitude’, which takes an in-depth look into how the measurement of auxiliary data is affected and how it affects the block adjustment and accuracy of a model. The investigation involved airborne data acquisition and a GPS/INS system designed for the purpose of recording this data. Their system took into account the GPS and camera mis-orientations with the INS system, as well as the displacement vectors relating to the camera imaging centre. The equations, in the form of a matrix calculation, took the form:

\begin{bmatrix} X_p \\ Y_p \\ Z_p \end{bmatrix} = \begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \end{bmatrix} + a\,R_b^m\,dR_p^b \begin{bmatrix} x_p^r \\ y_p^r \\ -f \end{bmatrix} \quad (6)

(J.Skaloud, M.Cramer, K.P.Schwarz, 1996, p. 127)
Where:
(Xp, Yp, Zp) and (xp^r, yp^r) are the point coordinates in a geodetic reference system and the reduced image coordinates in the photograph, respectively.
(X0, Y0, Z0) are the coordinates of the camera perspective centre in the reference frame.
a is a point dependent scale factor.
f is the lens focal length.
Rb^m(ω, φ, κ) is a 3D transformation matrix which rotates the camera frame into the GPS reference frame.
dRp^b = f(δ1, δ2, δ3) is the constant mis-orientation matrix between the INS system and the imaging sensor. Its solution can be obtained using an in-flight calibration. The major assumption is that the imaging sensor, GPS antenna and INS system stay fixed in their orientation and relative position. In a UAV where the camera is able to move around freely on a gimbal, any INS sensors would need to be located on the gimbal to determine the orientations at each point in time. This also changes the relative orientation between the GPS antenna and the INS system, which would need to be derived for each exposure time.
In order to solve equation (6), the GPS/INS derived positions and attitude need to be defined along with the sensor calibration (interior orientation), allowing all terms on the right hand side of the equation to be known. This enables the image point coordinates to be represented in the object space and georeferenced; this is how the auxiliary data achieves a georeferenced model without the need for ground control points. The difficulty lies in accurately determining each of the terms on the right hand side of the equation at exactly the same time as each exposure is taken.
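A minimal sketch of the direct georeferencing in equation (6). The numeric values are illustrative only, and the point dependent scale factor a is supplied directly here, whereas in practice it would be resolved from a surface model or multi-ray intersection.

```python
import numpy as np

def georeference(img_xy_mm, f_mm, X0, R_bm, dR_pb, a):
    """Equation (6): ground coordinates of an image point.
    img_xy_mm : reduced image coordinates (x_p^r, y_p^r) in mm
    f_mm      : lens focal length in mm
    X0        : camera perspective-centre coordinates (m) from GPS/INS
    R_bm      : rotation of the camera/body frame into the mapping frame
    dR_pb     : calibrated INS-to-sensor mis-orientation matrix
    a         : point-dependent scale factor"""
    img_vec = np.array([img_xy_mm[0], img_xy_mm[1], -f_mm]) / 1000.0  # mm -> m
    return np.asarray(X0, float) + a * (R_bm @ dR_pb @ img_vec)

# Nadir exposure, perfectly aligned sensors, flying height 100 m, f = 20 mm;
# for a point at ground level the scale is a = 100 / 0.020 = 5000 (illustrative).
P = georeference((5.0, -3.0), 20.0, [1000.0, 2000.0, 100.0],
                 np.eye(3), np.eye(3), a=5000.0)
print(P)   # -> [1025. 1985. 0.]
```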
The investigation of implementing this system called KINGSPAD (KINematic
Geodetic System for Position and Attitude Determination) shown below in Figure 15
was undertaken in 1995 as mentioned in the above journal article (J.Skaloud,
M.Cramer, K.P.Schwarz, 1996).
Figure 15: KINGSPAD layout (J.Skaloud, M.Cramer, K.P.Schwarz, 1996)
The results showed that there were large discrepancies in GPS position as there were
only 4-5 satellites. The errors in selected coordinates of ground control had a
standard deviation of 0.3m horizontally and 0.5m vertically flying at a height of
900m. The standard deviation of the GPS and attitude measurements are shown in
Table 2. Again the results were positive and the main limiting factor was the
acquisition of derived GPS/INS positions at the right times. Based on the 1996
paper, if the experiment were to be reproduced with a similar system with present
technology, the GPS data should be far more reliable and increase the accuracy
substantially.
Table 2: Measurement std dev. of GPS/INS system (Wade, 2014)
Camera Parameter σ of measuring errors Control point σ of errors
Pitch (ω) 1’48” -
Yaw (ϕ) 36” -
Roll (κ) 1’48” -
Easting (X) 0.15m 0.3m
Northing (Y) 0.15m 0.3m
Height (Z) 0.2m 0.5m
3 Flight Plan Optimisation
The use of UAV systems in surveys has been proven to be cost effective, and with the improvement of the previously mentioned GPS/INS integrated systems, the accuracies of the produced models will be highly beneficial in a variety of applications. This thesis sought to identify a means of planning a UAV flight in order to survey specified areas, and to give a detailed analysis of how to optimise the survey and achieve the required accuracies. With the expansion of the UAV market into commercial and personal use, there are a great many customised packages and options to suit different budgets, applications and experience levels. This paper investigates flight planning using multi rotor UAV systems, which can range from $300 to over $100,000; the difference between price brackets can lie in battery life, range, or the accuracy of in-flight systems and software. Research was based on the Vulcan Hexacopter (1.08m diameter carbon fibre frame), which was purchased by the University of Newcastle, NSW, Australia for approximately $15,000 AUD. The performance of the Vulcan Hexacopter provides accurate output for a mid-range device.
After relationships were found between the variable aspects of a UAV survey, a simple program, WADE_flight2014, was developed. It allows a user to select an area using a widely available mapping program such as Google™ Earth, select a camera system to attach to the hexacopter, select a lens (a fixed focal length is recommended for best results), and select a minimum flying height and timing of exposures. The program provides a number of options for flight plans, as shown in Figure 16, allowing the user to select the appropriate option according to their specific restraints/requirements. The user guide for WADE_flight2014 can be viewed in Appendix A.
Figure 16: Flight Optimisation for UAV surveys (Wade, 2014)
3.1 Ground/Object Coverage
It is important to know how the internal workings of a camera affect its ability to cover the object being surveyed. Simple trigonometry and similar triangles are best used when describing the ground sampling distance (GSD).
When an image is exposed and the photograph is taken, the internal sensor of the camera gathers all of the light information admitted through the lens; the lens spreads the light rays onto the sensor plane, effectively reversing and condensing whatever was in the frame of the image onto a small sensor made up of millions of pixels. Figure 17 shows where the sensor is located within a digital single-lens reflex (DSLR) camera.
Figure 17: Camera showing Sensor (Digital Photography Review, 1998-2014)
The similar triangles come into consideration when the focal length is known. The focal length is the distance between the optical centre of the lens and the principal point/focal point where the light hits the sensor. It is given in the specifications of each lens by the manufacturer. In order to fine tune and find the exact focal length for a particular lens and camera pairing, an internal calibration is completed. The calibrated focal length is the adjusted focal length after the radial lens distortion has been averaged.
Figure 18: Focal length and Principal point (ExposureGuide.com, 2014)
Once the calibrated focal length is found, the distance to the object/ground is related
as shown in Figure 20. The benefit of using a GPS controlled UAV system is that the
distance to the object can be somewhat defined in the flight plan and therefore this
becomes a “known” factor in the similar triangle equations. The other important
characteristic of the camera equipment is the size of the sensor. This paper is focused on full frame sensors, the professional and more expensive DSLR units; full frame cameras start at approximately $2,000 AUD. A crop factor needs to be applied to the focal length before any GSD calculations can be done if a smaller sensor is used. Comparisons between sensor sizes can be seen in Figure 19 below. Full frame sensors in a DSLR camera have a variety of advantages and also some disadvantages for photography. One disadvantage is their weight compared to smaller sensor counterparts. An advantage of a full frame sensor is that, being larger, its pixels are larger; this enables more light to be captured by each pixel, allowing a greater amount of light to be gathered before the photodiode is oversaturated, and less noise is present from neighbouring pixels. These attributes result in a higher quality image across differing light and contrast situations, which is helpful for the types of surveys that may be undertaken with a UAV. These attributes are also touched on later in chapter 3.5.1.
Figure 19: Comparisons between sensor sizes (GPS photography.com, 2012)
The full frame sensor is 36mm × 24mm and will have a different number of pixels depending on the camera manufacturer and model. The formula for working out how much ground area (GSD) will be covered in each image is shown below.

\text{GSD} = \frac{D \cdot S_{sx}}{f} \times \frac{D \cdot S_{sy}}{f} \quad (7)
Where:
D (m) is the distance to the object/ground from the camera
Ssx (m) is the sensor size in the x direction (i.e. for full frame Ssx is 0.036m)
Ssy (m) is the sensor size in the y direction (i.e. for full frame Ssy is 0.024m)
f (m) is the focal length (calibrated) for the lens.
Figure 20: Ground Sampling Distance (Digital Photography Review, 1998-2014)
For greater object coverage at the same distance from the object, a shorter focal length lens can be used. For example, using a Nikon D800 camera with a 20mm fixed focal length lens at a distance of 20m from the ground gives GSD = 864 m². If the lens were changed to a 24mm fixed focal length at the same distance of 20m, the GSD is only 600 m². These differences can play a major role in determining which lens and camera are best for a particular survey, as a shorter focal length allows greater object coverage, meaning fewer photographs and less flying time. It is recommended, however, that the focal length stay above 15mm, as there can be greater lens distortions with wider angle (short focal length) lenses.
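A minimal sketch of equation (7) reproducing the Nikon D800 comparison above (a calibrated focal length should be substituted in practice):

```python
def ground_coverage(distance_m, focal_m, sensor_x=0.036, sensor_y=0.024):
    """Equation (7): ground footprint of one image for a full-frame
    36 mm x 24 mm sensor. Returns (width_m, height_m, area_m2)."""
    gx = distance_m * sensor_x / focal_m
    gy = distance_m * sensor_y / focal_m
    return gx, gy, gx * gy

print(ground_coverage(20.0, 0.020))  # 20 mm lens -> ~(36.0, 24.0, 864.0)
print(ground_coverage(20.0, 0.024))  # 24 mm lens -> ~(30.0, 20.0, 600.0)
```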
3.2 Number of Photographs for UAV Survey
The next factor of a UAV survey is how many images need to be taken for a specified area. The image count greatly influences the quality of the resulting 3D model when using SfM software packages. As mentioned in section 2.2.1, there needs to be an overlap between consecutive images in a flight line. Due to the nature of the mathematical model used, it is important to maintain a minimum overlap so that the resulting 3D model is complete and does not have ‘holes’ within it. This paper looks at overlap in forward and side overlapping images, as this is where the research for the thesis was completed. Late in the research, however, an article was discovered, ‘Flight Planning and Orthophotos; Leaning Instead of Overlap’ (Raizman, 2012), which describes a method of using the camera and lens field of view angle and building lean within images to determine overlap parameters as a more accurate method for flight planning.
When determining image overlap it is important to consider why a particular overlap is required. Overlap is generally expressed as a percentage. Traditionally, aerial photography used a 60% overlap between images along a strip and 25% overlap between flight lines, which allowed every point on the ground to be captured about 2.5 times. Film was expensive and image capturing was slow. With today’s technology and systems, capturing images adds very little cost to a photogrammetric survey, and therefore extra images can be taken to provide redundant data. This thesis follows a principle of 80% overlap for images within a strip and 60% overlap between flight lines. This will capture each point on the ground/object 5 times and provide far greater redundancy than the traditional overlaps. Using the 80% overlap, it would take 3 bad or unusable images in a row to create a hole in the resulting model (AdamTechnology, 2012).
Figure 21: Image overlap along a strip (AdamTechnology, 2012)
Due to having redundant data with the 80% overlap, if all of the images from the
survey are useable, every second image may be removed to speed up processing time
and the model will be unaffected (Michael Gruber, Roland Perko, Matin Ponticelli,
2012). Of course, the greater the overlap used, the higher the quality of the model, at the expense of processing time and flight time. Image overlap also affects the expected accuracies of a model, which is discussed in chapter 3.3.
Once an overlap is determined, the number of images required to cover an
object/area becomes just a function of the GSD and the required survey area.
N_{images} = \left\lceil \frac{D_x}{\Delta x} \right\rceil \times \left\lceil \frac{D_y}{\Delta y} \right\rceil \quad (8)

Where:

\Delta x = \frac{D \cdot S_{sx}}{f}\,(1 - \text{forward overlap}) \quad (9)

\Delta y = \frac{D \cdot S_{sy}}{f}\,(1 - \text{side overlap}) \quad (10)
This calculation is best described by the example below, relating to Figure 22.
Figure 22: Example for Image overlap (Wade, 2014)
In the above image a football field is shown. This field has the parameters of 100m
in length and 60m in width corresponding to the Dx and Dy respectively from
equation (8). If the Nikon D800 camera with a 20mm lens is used, flying at a height
of 20m above the field, equations (9) & (10) become:
Δx = 7.2m, Δy = 9.6m
This means that the perspective centres of consecutive images along a strip must be 7.2m apart, and the flight lines must be 9.6m apart. The survey will therefore require 13.9 images to cover the length of the field and 6.25 strips of images to cover its width. As these numbers are not whole, it is important that they are rounded up to the next whole number, otherwise the overlap will be sacrificed. The result of equation (8) is then the product of images per strip and number of strips; in the case of the football field this is 14 × 7 = 98 images. The flying height changes the number of images drastically but also affects the achievable accuracy, discussed in chapter 3.3. For example, if the flying height were increased by 10m to 30m above the field, equation (8) gives 50 images for total coverage. Conversely, if the lens were changed to a longer fixed focal length, say 24mm, the survey would require 136 images at a height of 20m to achieve the desired overlap.
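A minimal sketch of equations (8) to (10) reproducing the football field example, using the 80%/60% overlap scheme adopted in this thesis:

```python
import math

def image_count(Dx, Dy, distance_m, focal_m,
                forward_overlap=0.8, side_overlap=0.6,
                sensor_x=0.036, sensor_y=0.024):
    """Equations (8)-(10): base distances and total image count for a
    rectangular survey area Dx x Dy (m); strip and image counts are
    rounded up so the overlap is never sacrificed."""
    dx = distance_m * sensor_x / focal_m * (1.0 - forward_overlap)  # (9)
    dy = distance_m * sensor_y / focal_m * (1.0 - side_overlap)     # (10)
    n = math.ceil(Dx / dx) * math.ceil(Dy / dy)                     # (8)
    return dx, dy, n

print(image_count(100.0, 60.0, 20.0, 0.020))  # -> (7.2, 9.6, 98)
print(image_count(100.0, 60.0, 30.0, 0.020))  # flying height 30 m -> 50 images
print(image_count(100.0, 60.0, 20.0, 0.024))  # 24 mm lens -> 136 images
```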
3.3 Pixel Size and Accuracies
When a photogrammetric survey produces data, it can be in the form of a three dimensional model/point cloud or an orthomosaic photo. An important feature of the produced models or images is the point density and also the ground pixel size (Gp). If measurements are to be taken from the model or features need to be identified, Gp is critical. For the archaeological site mentioned in chapter 1.2, very fine pixel quality and size were important in order to determine specific structural details of the site and give key information to the archaeologists. The error in measurement arises when zooming into an object within the image. If GCPs are used, their placement will be affected by Gp, and therefore so will the accuracy of the model.
Ground pixel size is a function of the distance from the target Dtarget, the focal length of the lens, the megapixel (MP) value of the camera and the sensor size. The relationship between Gp and the distance from the object/ground is linear: the ground pixel size increases the further the camera is moved from the subject. Figure 23 shows the linear relationship between Gp and Dtarget using a full frame Canon EOS 6D DSLR camera with a 28mm lens.
Figure 23: Ground pixel size vs Dtarget (Wade, 2014)
The formula for determining the ground pixel size is shown in equation (11).

G_p = \frac{D_{target} \cdot S_{sx}}{f \cdot \text{pixels}_x} \quad (11)
Where:
f is the focal length of the lens
Ssx is the sensor size in the x direction (i.e. 36mm for a full frame sensor)
pixelsx is the number of pixels across the sensor in the x direction
The pixelsx value is usually given in the product specifications by the manufacturer. Even though the side of the sensor in the ‘y’ direction is only 24mm, the pixel count is also lower, and the ratio between the two works out the same as in the x direction; this is because the pixels are square. Pixels are in the order of micrometres: the Canon EOS 6D has pixels of 6.5µm, whereas the Nikon D800 has pixels of 4.8µm.
For the Canon EOS 6D example shown in Figure 23, if a Dtarget of 20m is selected, Gp will be equal to 4.7mm. The longer focal length in this case lowers Gp, as it is effectively ‘zooming’ in closer to the object whilst losing GSD coverage. By changing the lens to a 24mm focal length, the GSD increases, providing greater coverage and requiring fewer images to complete a survey; however, the pixel size increases to 5.5mm for the same Dtarget. This seems a small difference; however, at large scales and for a larger Dtarget the difference becomes noticeable if resolution is of high importance in a model.
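A minimal sketch of equation (11) reproducing the Canon EOS 6D figures; the 6D is assumed here to have 5472 pixels across its 36mm sensor width (about 6.5µm pixels), consistent with the pitch quoted above:

```python
def ground_pixel_size(d_target_m, focal_m, pixels_x=5472, sensor_x=0.036):
    """Equation (11): ground pixel size Gp in metres."""
    return d_target_m * sensor_x / (focal_m * pixels_x)

print(f"28 mm lens: {ground_pixel_size(20.0, 0.028) * 1000:.1f} mm")  # ~4.7 mm
print(f"24 mm lens: {ground_pixel_size(20.0, 0.024) * 1000:.1f} mm")  # ~5.5 mm
```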
3.3.1 Planimetric Accuracy
As with all surveying practices, determining the expected accuracies of a photogrammetric survey is an integral part of the planning procedure. From a theoretical point of view this is achieved through manipulation of the available specifications of the camera, lens and object. Before completing a survey, it is suggested that time is taken to define the accuracy requirements given by the client, or indeed to advise the client of the costs associated with fulfilling the specified accuracy. The program developed along with this thesis gives a range of expected accuracies, shown as standard deviations in both planimetric (planar x-y or Easting-Northing) and height (depth) terms.
Before calculating the accuracy of a survey, it is helpful to understand the
mathematical reasoning. When two or more images are taken in sequence, their rays
of sight will intersect at varying degrees creating an error ellipse for each point on
the ground that is intersected. Figure 24 shows a typical error ellipse that is created
for a point.
Figure 24: Error ellipse (AdamTechnology, 2012)
To determine the planimetric accuracy, the extent of the error ellipse in the plane cutting it at right angles to the view direction is found, as given by equation (12). The accuracy of a pixel in the image sensor is determined by the quality of the image and cannot easily be specified, as it involves variables such as noise, blur and the accuracy of the camera calibration. A safe method for determining planimetric accuracy is to take the pixel error as 0.5 pixels; this is a conservative estimate, and with some basic knowledge of photography it is easily improved to below 0.3 pixels. Further detail about pixel error is discussed in chapter 3.5. As with any expected error, when dealing with a client it is important to be conservative in the estimate so as to leave room for error. Taking the pixel error as a constant 0.5 pixels, the equation takes the form:

\sigma_{plan} = 0.5 \cdot G_p = \frac{0.5 \cdot D_{target} \cdot S_{sx}}{f \cdot \text{pixels}_x} \quad (12)
As with Gp, there is a linear relationship between planimetric accuracy and Dtarget. Figure 25 shows how pixel size, planimetric accuracy and height accuracy are related to each other and to Dtarget. As Gp is affected by focal length and distance to object, so too is the planimetric error, in proportion to the pixel accuracy.
Figure 25: Gp, Planimetric accuracy and Height accuracy vs Dtarget (Wade, 2014)
3.3.2 Height Accuracy
Most photogrammetry applications are more concerned with height accuracy, or error, as this is the most difficult and sensitive parameter to refine. Because images are taken from a distance to an object and in two-dimensional form, determining accurate height information from a bundle adjustment is usually the most difficult portion of the survey. Being related to the semi-major axis of the error ellipse in Figure 24, the height accuracy is affected by the following factors: Dtarget (m), Δx (m) and the planimetric accuracy. Δx, as defined in equation (9), is also a function of the desired overlap, sensor size and focal length; it is the distance between the perspective centres of the camera stations along a strip of images. By manipulating the desired overlap and increasing Δx (sometimes referred to as the 'base' distance), the error ellipse becomes more circular and, as a result, the height accuracy improves. Equation (13) gives the expected height accuracy.

σheight = (Dtarget / Δx) × σplanimetric    (13)
It was discovered that, when changing the focal length in the hope that the base distance Δx would alter enough to improve the height accuracy, something unexpected occurred. Experimenting with the program, a Sony SLT-A99 camera was selected with a 35mm focal length lens (found as a compatible lens on Sony's website). The height accuracy at a Dtarget of 50m was 21mm, with a planimetric accuracy of 4.3mm. Changing to a shorter 20mm lens increased the base distance Δx from 10.3m to 18m and, as expected, increased the planimetric error to 7.5mm. It was interesting to note, however, that due to the relationship between focal length, pixel size and base distance, the height accuracy changed by only 0.1mm, which is negligible. What this means is that, as long as the planimetric accuracy stays within the requirements, the lens can be changed to a shorter focal length, allowing far fewer images to be captured to cover the same area while still maintaining the same height accuracy. Figure 26 shows the changes in planimetric accuracy, height accuracy and pixel size. Figure 27 shows the difference in the number of images required for the two lenses on a sample area of 300m by 100m flying at 50m above the ground.
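The lens comparison above can be reproduced with a short sketch of equations (11)-(13), assuming the Sony SLT-A99 specifications from Appendix B (35.9mm sensor width, 6000 pixels across), a 0.5 pixel error and 80% forward overlap; the function and variable names are illustrative, not those used in WADE_flight2014.

```python
# Sketch of equations (12) and (13) for the Sony SLT-A99 lens comparison.
# Assumptions: pixel error 0.5 px, forward overlap 80%, sensor width
# 35.9 mm with 6000 pixels across; lengths in metres unless noted.

SENSOR_X_M = 0.0359
PIXELS_X = 6000
OVERLAP = 0.80

def accuracies(d_target_m, focal_m):
    gp = d_target_m * SENSOR_X_M / (focal_m * PIXELS_X)       # eq. (11)
    sigma_plan = 0.5 * gp                                     # eq. (12)
    base = (1 - OVERLAP) * d_target_m * SENSOR_X_M / focal_m  # eq. (9), Δx
    sigma_h = (d_target_m / base) * sigma_plan                # eq. (13)
    return base, sigma_plan * 1000, sigma_h * 1000            # σ in mm

for focal_mm in (35, 20):
    base, plan_mm, h_mm = accuracies(50, focal_mm / 1000)
    print(f"{focal_mm} mm: dx = {base:.1f} m, "
          f"planimetric = {plan_mm:.1f} mm, height = {h_mm:.1f} mm")

# Expected (matching the text): Δx grows from 10.3 m to 18.0 m and the
# planimetric error from ~4.3 mm to ~7.5 mm, while the height error
# stays at ~21 mm for both lenses.
```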
Figure 26: Comparison of lenses Sony SLT-A99 (Wade, 2014)
Figure 27: Number of images for different lenses (Wade, 2014)
The benefit of fewer images can be a major factor in determining which lens to use for a particular survey, if the choice is available. Other factors would also have to be weighed up, however, such as model quality, given the lower planimetric accuracy and larger pixel size. The only ways to improve height accuracy are to decrease the flying height of the UAV (meaning more images need to be taken) or to reduce the overlap of images, which risks losing redundancy and producing a poor model. Because the lower focal length lens reduces the number of images required, there is the option to reduce the flying height such that the number of images required is the same as, or similar to, that of the 35mm lens.
At 50m above the ground the 35mm lens required 300 images on the sample area. By
reducing the flying height to 30m with the 20mm lens, only 245 images are required
and the height accuracy improves to 12.5mm, while the planimetric accuracy of 4.5mm is practically the same as for the 35mm lens at 50m Dtarget. Figure 28 and Figure 29 show
the comparison between the 35mm focal length and the 20mm focal length on the
Sony SLT-A99 camera.
Figure 28: Sony SLT-A99 35mm focal length (WADE_flight2014)
Figure 29: Sony SLT-A99 20mm focal length (WADE_flight2014)
The program user is able to view the options and select the flight plan that is of
greatest benefit for their particular survey and accuracy requirements.
3.4 Estimating Total UAV Survey Time
When planning to undertake a photogrammetric survey using a UAV drone it is
important to consider the time it will take to complete it accurately and thoroughly so
as to avoid returning to the field to salvage missed data. Time constraints can be very
critical to how much area each flight can survey given a set requirement for accuracy
as mentioned in the previous chapter. In testing completed using the Vulcan Hexacopter drone at the University of Newcastle campus, it was found that the manufacturer's specified flight time of 20-25 minutes applied to the drone carrying zero payload. This is drastically diminished once the battery pack and camera/lens system are attached. It was estimated that the drone could sustain full flight for around 10-11 minutes with a relatively moderate-weight DSLR camera (Canon 100D, approx. 650g with lens). Testing of this drone was limited, and the actual flight times for a survey were therefore not recorded during the writing of this thesis. It was, however, determined that the flight time of a survey is a major restriction when planning and needs to be closely investigated.
The factors affecting the flight time of a survey are the number of images, the timing of images and, as a result, the flight speed. Once the number of images is determined for varying Dtarget values, how fast or slow the UAV flies depends on how often each image is taken. Most late model DSLR cameras have the ability to take images at set time intervals (epochs), either through the hardware or through external software such as an integrated flight system. The relationship between the timing of the image epochs and the flight speed is simply a function of the base distance Δx as defined in equation (9). The following formula shows the relationship, where tepoch is the time between exposures in seconds.

vmax = Δx / tepoch    (14)
The program WADE_flight2014 allows the user to specify the image epochs
depending on their particular hardware specifications. It will then produce the
maximum flight speed, in m s⁻¹, that will be required to sustain the overlaps and accuracies determined in the previous chapters.
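A minimal sketch of equation (14), assuming Δx from equation (9) and illustrative values for camera, lens and epoch (not output from WADE_flight2014):

```python
# Sketch of equation (14): maximum flight speed v_max = Δx / t_epoch.
# Assumed example: Canon EOS 6D (36 mm sensor, per Appendix B), 28 mm
# lens, 80% forward overlap, one exposure every 2 seconds.

def max_flight_speed(d_target_m, focal_mm, sensor_x_mm, overlap, t_epoch_s):
    base_m = (1 - overlap) * d_target_m * sensor_x_mm / focal_mm  # Δx, eq. (9)
    return base_m / t_epoch_s                                     # eq. (14)

for h in (20, 30, 50):
    v = max_flight_speed(h, 28, 36, 0.80, 2.0)
    print(f"Dtarget = {h} m: fly at or below {v:.2f} m/s")
```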
As long as the UAV flies below the calculated speed, the overlap between image
pairs and flight lines will be at least the required amount (80%). Most flight systems
on UAV hexacopters that have been investigated have the ability to define a flight speed between waypoints. See Figure 30 for a screenshot of the DJI Ground Station 4.0 software that came packaged with the A2 flight controller on the purchased Vulcan Hexacopter. No testing has been completed yet as to how accurate this speed
is or indeed how wind affects the UAV speed.
Figure 30: DJI Ground Station 4.0 waypoint editing
The option to complete an adaptive bank turn to the next waypoint has been selected, as this waypoint was at the end of a strip and the UAV would turn to begin the next strip (this saves time in the air compared with the alternative 'stop and turn' option).
Once the maximum speed has been determined for a set of waypoints, the total flight time is calculated by equation (15) below. *Note: because the maximum flight speed is used, the total flight time is the minimum time to survey the area with the specified requirements of accuracy etc.

Tflight = (nstrips × Dx) / vmax    (15)

Once simplified, this becomes equation (16).

Tflight = (Dx × Dy) / (vmax × Δy)    (16)

Where:
Dx & Dy are the length and width respectively of the area to be surveyed, nstrips is the number of parallel flight strips (Dy/Δy), and Δy is the spacing between adjacent flight lines.
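The following sketch illustrates equations (15) and (16) under stated assumptions: strips of length Dx spaced Δy apart, the same 80% overlap between flight lines, and a 36 x 24mm full frame sensor. It is a simplification of the program, with illustrative values only.

```python
# Sketch of equations (15)-(16): minimum total flight time.
# Assumptions: Δy computed with the same 80% overlap between flight
# lines (sensor height 24 mm for a full frame), v_max from eq. (14),
# and the strip count rounded up plus one closing strip (an assumption).

import math

def flight_time_min(dx_m, dy_m, d_target_m, focal_mm, t_epoch_s,
                    sensor_x_mm=36.0, sensor_y_mm=24.0, overlap=0.80):
    base_x = (1 - overlap) * d_target_m * sensor_x_mm / focal_mm  # Δx
    base_y = (1 - overlap) * d_target_m * sensor_y_mm / focal_mm  # Δy
    v_max = base_x / t_epoch_s                                    # eq. (14)
    n_strips = math.ceil(dy_m / base_y) + 1
    return n_strips * dx_m / v_max / 60                           # minutes

# Football field example (100 m x 60 m) at two flying heights:
for h in (20, 45):
    t = flight_time_min(100, 60, h, 28, 2.0)
    print(f"Dtarget = {h} m: about {t:.0f} minutes")
```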
Figure 32 shows how the results are displayed in WADE_flight2014 for the inputs in Figure 31, on the area of the football field in Figure 22.
Figure 31: Input parameters for WADE_flight2014
Figure 32: WADE_flight2014 outputs for survey of football field using Canon EOS 6D
As can be seen in Figure 32, there is a substantial difference in total flight time between near and far Dtarget values. If a sacrifice of 8mm in pixel size and 16mm in height accuracy is made, the survey could take around 27 minutes less to complete.
To compare cameras of similar specifications, Figure 33 was created using Microsoft Excel. It shows that cameras with different specifications, if using the same lens focal length, take the same time to complete the survey. The trade-off comes in the form of accuracy and pixel size/quality.
Figure 33: Camera Comparisons for the football field area (100m x 60m) with Dtarget = 20m; flight time shown in minutes, height accuracy and pixel size in millimetres (Wade, 2014)
If flight time needs to be decreased, it is recommended to use a shorter focal length lens, as can be seen in Figure 33 with the Sony camera using a 20mm focal length. Using a shorter focal length lens on the same camera will not affect the depth accuracy of the model; however, it will slightly increase the pixel size, resulting in slightly worse resolution, while decreasing the total flight time. The user needs to determine which of the requirements is most demanding for the survey/client.
3.5 Camera Settings
This chapter explores the internal mechanisms of the camera system, how certain camera settings can be understood and optimised to achieve the best results, and how to improve the pixel accuracy used to determine accuracies within the model as mentioned in chapter 3.3. Most people with a slight understanding of technology are able to point and shoot a camera and obtain an image of an area or object. It takes an understanding of how and why camera settings are altered to achieve the best possible image quality for a given set of circumstances. Since photogrammetry is concerned with accuracy and data quality, it is reasonable to expect a survey to have the most accurate and functional images possible for creating the resulting model or output.
The following three main functionalities of exposure need to be optimised to create quality images:
• Shutter speed
• Aperture
• ISO sensitivity
Figure 34: Camera Settings (Digital Camera World, 2012)
These three settings work in unison; usually, altering one means the others need adjusting too.
3.5.1 Camera Sensors
Digital cameras record exposures via their image sensor, which can be one of two types: Charge Coupled Device (CCD) sensors or Complementary Metal Oxide Semiconductor (CMOS) sensors.
Figure 35: CCD (left) and CMOS (right) image sensors
Both sensors work by converting light into an electric charge and processing it into electric signals. Both were developed around the same time, in the late 1960s to early 1970s, with CMOS being slightly younger (Herd, 2013). The production methods for CMOS sensors are much cheaper and simpler than for CCD sensors, which has assisted in decreasing the pricing of cameras (Litwiller, 2001).
3.5.1.1 CCD
In a CCD sensor, every pixel's charge is transferred through a very limited number of output nodes to be converted to voltage, buffered and sent off the chip as an analogue signal. This means that the entire pixel can be devoted to capturing light, and the output is highly uniform, resulting in high image quality and less noise
within the images. Because of the processes involved, these sensors use much more
energy than their CMOS counterparts. The technology in CCD sensors was designed
specifically for cameras as opposed to CMOS sensor technology which is also used
in other microchips (Litwiller, 2001). CCD sensors are generally a lot larger and
more suited to high end imaging applications.
3.5.1.2 CMOS
The CMOS sensor was designed as a less power-hungry alternative to the CCD, with a variation in the way pixels handle information. In a CMOS sensor each pixel has its own charge-to-voltage conversion. The make-up of CMOS sensors has an effect on
the light capturing ability of each pixel as much of the surrounding area is in use for
processing (Litwiller, 2001). The non-uniformity of pixel processing can lead to a
noisier image which is not desired for photogrammetry. The low power consumption
and the fast processing speed, however, do make this sensor useful for photogrammetric applications from a UAV. The majority of high-end professional cameras now use CMOS sensors, with very little difference to be found between the image quality of the two types (Herd, 2013). Other functionalities and precautions are now used to reduce noise within images (some of these are discussed in the following chapters).
3.5.2 Shutter Speed
The first setting that is influenced the most by the flight planning portion of a survey
is the lens shutter speed. The camera shutter in a DSLR is located directly in front of
the image sensor, as shown in Figure 36.
Figure 36: Canon 5D Internal mechanisms (2000-2013 Little Guy Media, 2002)
The shutter speed determines how long the image sensor is exposed to the light that
is entering through the lens. In modern equipment this can range anywhere from several seconds to 1/16000 of a second. A correctly exposed image produces the best result and a balance of natural light for the environment. The reason this is important
when undertaking an aerial survey with a UAV is because there is movement
involved. Fast shutter speeds are best to freeze a moving object to make it appear
still and without blur. If the UAV/camera is considered fixed in space, then the object/ground becomes the moving target. How fast the target is moving determines how fast the shutter needs to close to avoid a blurry image.
As a result of an effectively developed flight plan, the maximum speed of the UAV
is known. To capture an image that is sharp, it is recommended that in the time it
takes the image to be exposed, the object does not move more than 0.5-1 pixel (Gp).
In WADE_flight2014 each minimum shutter speed is calculated depending on the
Gp and the maximum flight speed using the following formula.
tshutter = (0.5 × Gp) / vmax    (17)
Substituting Gp and the flight speed for a given Dtarget, focal length and camera, equation (17) can be simplified to:

tshutter = tepoch / (2 × (1 − overlap) × pixelsx)    (18)
This gives the shutter speed such that the UAV has moved only 0.5 pixel by the time the image has been exposed. The value can be halved if the UAV is allowed to move one full pixel during the exposure.
During the development of the program WADE_flight2014 it was discovered that, due to an interesting relationship between the ground pixel size and the flight speed at various flying heights, the shutter speed remained constant as Dtarget increased. Figure 37 shows the shutter speeds, expressed in seconds⁻¹, in the last column. These speeds are calculated for flying heights between 10m and 45m, yet the speed remains the same for any flying height (Dtarget). This was unforeseen in the development of the program; however, it simplifies the planning process, as the camera need only be set to a particular shutter speed once, regardless of the height.
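The cancellation behind this result can be demonstrated with a short sketch of equations (17) and (18): substituting Gp from equation (11) and vmax from equation (14) into equation (17), Dtarget cancels, so the required shutter speed is the same at every flying height. Values below are illustrative (Canon EOS 6D, 80% overlap, 2 second epochs), not output from WADE_flight2014.

```python
# Sketch of equations (17)-(18): minimum shutter speed for 0.5 px blur.
# Gp (eq. 11) and v_max = Δx / t_epoch (eq. 14) are both proportional
# to Dtarget, so Dtarget cancels in eq. (17).

PIXELS_X, SENSOR_X_MM, OVERLAP, T_EPOCH = 5472, 36.0, 0.80, 2.0

def shutter_time(d_target_m, focal_mm):
    gp = d_target_m * SENSOR_X_MM / (focal_mm * PIXELS_X)  # eq. (11), metres
    v = (1 - OVERLAP) * d_target_m * SENSOR_X_MM / focal_mm / T_EPOCH
    return 0.5 * gp / v                                    # eq. (17), seconds

for h in (10, 25, 45):
    t = shutter_time(h, 28)
    print(f"Dtarget = {h} m: 1/{1/t:.0f} s")
# Every height prints the same value, 1/1094 s, equal to
# t_epoch / (2 * (1 - overlap) * pixelsx) from equation (18).
```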
Figure 37: WADE_flight2014 output showing all parameters
3.5.3 Aperture
Aperture is an important camera function to understand in photogrammetry, as it relates to the depth of field within an exposed image. Since it is important to have as much of the image in focus as possible, it is essential to understand how to manipulate the camera's aperture settings to maximise this depth. Aperture, or a camera's 'f-stop' number, refers to the size of the opening through which light passes before it reaches the shutter and sensor.
Figure 38: Camera Aperture (ExposureGuide.com, 2014)
As shown in Figure 38, the opening of the lens varies from small to large depending
on the aperture setting. When light is forced to enter through a small space, the depth
of field is greater than when the light enters through a larger opening. Aperture typically varies from f/1.4 to f/32, with in-between settings (f/2, f/2.8, f/4, f/5.6, f/8, f/11, f/16, f/22). Consider Figure 39 below; on the left side of the image the aperture setting is approximately f/22 and on the right side it is f/2.8.
Figure 39: Depth of field (ExposureGuide.com, 2014)
The left side of the image best represents the depth of field required for accurate 3D modelling with photogrammetry, as the entire scene appears sharp and in focus; a larger f-number such as f/22 is therefore required to achieve this. Karara (1998) states, “It is advisable to use a reasonably small aperture to maintain depth of field and reduce coma, spherical aberration and, to a lesser extent astigmatism.”, meaning that a smaller aperture reduces the aberrations that cause light rays passing through the lens to disperse before reaching the image sensor.
Selecting the best aperture for UAV photogrammetry can be more complex than simply setting the f-stop to f/32 or f/16, however. Due to the nature of the setting and the restriction of light, the image still needs to be correctly exposed to give a quality image and pixel values. A direct relationship exists between the aperture setting and the camera's shutter speed. When light enters the lens through the aperture opening, the shutter needs to stay open long enough to allow sufficient light through to the image sensor. When a smaller opening is set (i.e. f/22), less light is allowed through the lens and, as a result, the shutter must stay open longer to expose the sensor. When the aperture is set to a large opening (i.e. a small f-number such as f/2.8), the shutter must close quickly, as the excess light entering the lens can overexpose the image, resulting in a white scene. It is important to understand these relationships because, due to the moving nature of UAV photogrammetry and the minimum shutter speeds it imposes, certain aperture settings may not be possible, which could affect the depth of field of the images.
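The trade-off can be made concrete with a small sketch of exposure reciprocity, assuming the standard exposure-value relation EV = log2(N²/t); the EV and f-numbers below are illustrative only.

```python
# Sketch of the aperture/shutter trade-off: keeping the exposure value
# EV = log2(N^2 / t) constant, a smaller opening (larger f-number N)
# forces a longer shutter time t. Illustrative values only.

def shutter_for(ev, f_number):
    """Shutter time t (seconds) giving exposure value ev at f/N."""
    return f_number ** 2 / 2 ** ev

EV = 13  # a bright, overcast-daylight exposure value (assumed)
for n in (2.8, 8, 22):
    t = shutter_for(EV, n)
    print(f"f/{n}: t = 1/{1/t:.0f} s")
# f/2.8 allows a fast shutter; f/22 needs roughly 60x longer, which may
# fall below the minimum shutter speed computed in chapter 3.5.2.
```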
3.5.4 ISO Sensitivity
ISO setting can be manipulated to create a balance between aperture and shutter
speed and maximise the depth of field while still maintaining the minimum required
shutter speed during a UAV photogrammetric survey.
ISO is the measure of the light sensitivity of the image sensor. In a DSLR camera the ISO can typically be set to several values, including 100, 200, 400, 800 and 1600, considered the 'normal' range. According to Nikon, this range can go as low as 50 and as high as 204800 (Nikon USA, 2014).
Figure 40: Changing ISO sensitivity (macrominded.com, 2014)
Lower ISO numbers correspond to less light sensitivity for the image sensor. Again, given the light required for a balanced exposure, this may interfere with the shutter speed: low sensitivity requires a slower shutter speed to allow sufficient light onto the sensor. The simple answer seems to be to increase the sensitivity, use a small aperture opening and a fast shutter speed; however, increasing the ISO comes with another complication. Grainy or 'noisy' images are often a product of a high ISO setting. The more sensitive the sensor is to light, the grainier the image produced and the poorer the pixel quality/accuracy.
Understanding the three principles of exposure will allow the photogrammetrist to manipulate the camera settings to produce the best exposures for the available lighting. New technologies and cameras are finding a balance between the principles of exposure and substantially automating the process, making it easier to obtain accurate and balanced images for use in many applications, including photogrammetry. It is important that each of the settings allows the scene to be captured with enough detail for the 3D models and orthomosaic photos to be usable and detailed. The UAV flight planning program WADE_flight2014 assists in determining the shutter speed; however, the aperture and ISO settings cannot be calculated mathematically, as the amount of light in the environment varies from survey to survey, and they must therefore be adjusted to suit the conditions.
4 Conclusions and Recommendations
The purpose of this thesis was to study the parameters involved in photogrammetry and discover the uses for UAV technology in combination with photogrammetry: to understand the geometric principles that form the algorithms and provide the basis for 3D scene reconstruction and modelling; to determine the accuracies required of measured auxiliary data in order to have accurate georeferenced scene models without the need for ground control points; and to determine the camera and image related aspects that have an effect on the overall model accuracy, seeking to optimise these through use of a simple flight planning program for photogrammetric surveys.
The use of UAV drone technology is gaining momentum year by year. The range of applications is broad, and the achievable accuracies under the right circumstances rival even laser scanning. The benefits of a photogrammetric model exceed those of a laser-scanned point cloud since, for many purposes, the image data is itself a useful tool for investigation. The data is quickly and easily transformed from images to usable models using one of the many SfM software packages available. The greatest benefit appears to be in areas where traditional survey techniques are unavailable, where sites cannot be accessed by humans, or where sites are too large to survey economically by other means. These applications include mining, archaeological sites, environmental landforms such as glaciers and volcanoes, sacred or sensitive sites, and hazardous areas. UAV photogrammetric surveys have proven cost effective in both equipment and time.
The parameters that provide the platform for photogrammetric modelling algorithms and measurements have been known for centuries. It appears that as technology advanced, so too did the understanding and application of the mathematical formulae that enhanced photogrammetry and its abilities. From the earliest image capturing techniques to the present various modes of data acquisition, the fundamental principles of photogrammetry still play a vital role in how a scene is reconstructed and in the measuring techniques used. The epipolar lines and light rays first envisioned by Leonardo da Vinci are still involved in the most advanced formulae. Through understanding how and why the photogrammetric parameters discussed in this thesis are involved, one is able to undertake a photogrammetric survey that ensures the most accurate results possible.
The accuracy required when measuring auxiliary camera data, in the form of the camera attitude and global position, to produce survey quality models from a UAV without ground control points has yet to be determined. Much research has been completed in the area; however, it still seems to be plagued by the lack of GNSS reliability. Further research will be required in order to integrate a system that is accurate and light enough for UAV use. It was hoped that testing would be completed to determine the absolute accuracy of the GPS positioning using the A2 flight controller system shown in Figure 41.
Figure 41: A2 Flight Controller components
Knowing these accuracies would assist in determining how accurately the auxiliary data can be measured; however, due to a number of circumstances this testing was not able to be completed during the writing of this thesis. Similar testing was completed in 2011 in Zurich, Switzerland (Blaha, et al., 2011), where the Falcon 8 flight system was investigated, with inconclusive data showing variations of up to 1.5m
in position. The next major step forward in photogrammetry will be in the form of
this integrated system and the ability for unmanned aircraft to collect data over any
area and have it produced as an accurate georeferenced model simply through
knowing the location and orientation parameters of the camera.
Until the previously mentioned technological advancements occur, photogrammetric surveys must be optimised by limiting the effects of image quality. To ensure the data being collected in the form of images is as accurate as possible, the program WADE_flight2014 was developed and successfully tested across a range of cameras and survey areas. The image considerations within the camera, such as internal camera calibration, shutter speed, aperture, ISO sensitivity, focal length and sensor details, were all investigated. Formulae were derived for use in WADE_flight2014 and provide a way of planning a photogrammetric survey to optimise time spent acquiring data and to determine achievable accuracies. Such a program, integrated with UAV flight systems, would be economically beneficial for anyone hoping to undertake a survey. A text output file is currently being developed for the program to provide coordinates for the required camera locations during flight and to streamline assimilation with pre-existing flight planning software. The program can also be used for flights other than those pointing perpendicular to the ground, as the principles do not change: it could be applied to a longwall mine to measure geological formations, or to plan a survey to model the exterior of a large building. The principles presented in this thesis should be understood by any photogrammetrist in order to produce accurate and reliable results systematically. Even though it is quickly becoming very simple for anyone to produce some form of three dimensional reconstruction, it still takes a knowledgeable person to perform a photogrammetric survey accurately and repeatedly.
4.1 Further Research
Listed below are topics of research that would improve the understanding of photogrammetry and UAV applications, as well as of GPS and INS systems.
• The benefits of, and differences between, terrestrial (random coverage) style photogrammetry and planned aerial imaging with calculated overlapping data.
• A comparison of accuracies and applications of both full frame sensor cameras and cropped frame sensors.
• A comparison of fixed focal length lens systems versus variable focal length lenses for use in UAV photogrammetry.
• The accuracies and integration of GNSS/INS systems on lightweight UAVs for the purpose of photogrammetric application.
• The accuracy of inbuilt internal camera orientation sensors and GPS.
5 Works Cited
2000-2013 Little Guy Media, 2002. The Canon EOS. [Online]
Available at:
http://www.robgalbraith.com/content_pagefcb5.html?cid=7-4806-4822
[Accessed 5 December 2014].
Ackerman, F., 1984. Utilization of Navigation Data for Aerial Triangulation.
International Archives of Photogrammetry and Remote sensing, 25(A3a).
AdamTechnology, 2012. ADAM Technology Team Blog. [Online]
Available at: http://www.adamtech.com.au/
[Accessed July 2014].
American Society for Photogrammetry and Remote Sensing, 2006. Manual of
Photogrammetry. Fifth ed. s.l.:s.n.
Balletti, C., Guerra, F., Tsioukas, V. & Vernier, P., 2014. Calibration of Action
Cameras for Photogrammetric Purposes. Sensors, 18 September, pp.
17471-17490.
Blaha, M., Eisenbeiss, H., Grimm, D. & Limpach, P., 2011. Direct Georeferencing
of UAVS. International Archives of the Photogrammetry, Remote
Sensing and Spatial Information Sciences, XXXVIII(1), pp. 1-6.
Center for Photogrammetric Training, 2008. History of Photogrammetry. [Online]
Available at:
https://spatial.curtin.edu.au/local/docs/HistoryOfPhotogrammetry.pdf
[Accessed 01 07 2014].
Chiang, K.-W., Tsai, M.-L. & Chu, C.-H., 2012. The Development of an UAV
Borne Direct Georeferenced Photogrammetric Platform for Ground
Control Free Applications. Sensors, 4 July, pp. 9161-9180.
Clayton, A. I., 2012. Remote sensing of subglacial bedforms from the British Ice Sheet using an Unmanned Aerial System (UAS): Problems and Potential, Durham University: Durham Theses.
Diefenbach, A. K., Dzurisin, D., Crider, J. G. & Schilling, S. P., 2006.
Photogrammetric Analysis of the Current Dome-Building Eruption of
Mount St. Helens Volcano. American Geophysical Union, Fall Meeting
2006, abstract #G53A-0870, December, p. A870.
Digital Camera World, 2012. 5 must-have menu tweaks for Canon users. [Online]
Available at: http://www.digitalcameraworld.com/2012/07/10/5-must-have-menu-tweaks-for-canon-users/
[Accessed 12 October 2014].
Digital Photography Review, 1998-2014. DP Review. [Online]
Available at: www.dpreview.com
[Accessed May-December 2014].
Digital Photography School, 2006-2014. Is Full Frame Still the Best?. [Online]
Available at: http://digital-photography-school.com/full-frame-still-best/
[Accessed May-December 2014].
Doyle, F., 1964. The Historical Development of Analytical Photogrammetry. 2 ed.
s.l.:s.n.
Eisenbeis, H., 2009. UAV Photogrammetry, Dresden: ETH Zurich.
Erica Nocerino, Fabio Menna, Fabio Remondino, Renato Saleri, 2013. Accuracy and Block Deformation Analysis in Automatic UAV and Terrestrial Photogrammetry. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2-6 September, II-5/W1(XXIV), pp. 203-208.
ExposureGuide.com, 2014. Focusing Basics. [Online]
Available at: http://www.exposureguide.com/focusing-basics.htm
[Accessed June-December 2014].
ExposureGuide.com, 2014. Lens basics; Understanding Camera Lenses. [Online]
Available at: www.exposureguide.com/lens-basics.htm
[Accessed 20 December 2014].
G.Forlani, L. Pinto, R. Roncella, D. Pagliari, 2013. Terrestrial photogrammetry without ground control points. Earth Science Informatics, 127(1), pp. 1-11.
Ganjanakhundee, S., 2013. German map expert who surveyed Preah Vihear in 1961
returns. [Online]
Available at: http://www.nationmultimedia.com/national/German-map-
expert-who-surveyed-Preah-Vihear-in-196-30204411.html
[Accessed 9 October 2014].
Geodetic Systems Inc., 2014. What is Photogrammetry-The basics of
Photogrammetry. [Online]
Available at: http://www.geodetic.com/v-stars/what-is-
photogrammetry.aspx
[Accessed May-December 2014].
GPS photography.com, 2012. Digital SLR Trends In Military Photography. [Online]
Available at: http://www.gpsphotography.com/digital-slr-trends-in-military-photography
[Accessed 8 December 2014].
Hartley, R. & Zisserman, A., 2004. Multiple View Geometry in Computer Vision. 2nd ed.
Cambridge: Cambridge University Press.
Herd, J., 2013. 3D Reconstruction from Video Data and Laser Scanning Technology,
Newcastle: Herd, James.
Hutton, J. et al., 2014. DMS-UAV Accuracy Assessment: AP20 With Nikon D800E,
s.l.: Applanix Corporation.
J.Skaloud, M.Cramer, K.P.Schwarz, 1996. Exterior Orientation By Direct
Measurement Of Camera Position And Attitude. International Archives
of Photogrammetry and Remote sensing, XXXI(B3), pp. 125-130.
Karara, H., 1998. Appendix - Camera Calibration. In: H. Karara, ed. Non-
Topographic Photogrammetry. s.l.:the American Society for
Photogrammetry and Remote Sensing, pp. 62-80.
Kniest, E., 2013. Stereophotogrammetry. s.l.:s.n.
Litwiller, D., 2001. CCD vs. CMOS: Facts and Fiction. Photonics Spectra, January,
pp. 1-4.
Westoby, M.J., Brasington, J., Glasser, N.F., Hambrey, M.J., Reynolds, J.M., 2012. Structure from Motion photogrammetry: A low-cost, effective tool for
geoscience applications. Geomorphology, September, 179(1), pp. 300-
314.
macrominded.com, 2014. Understanding ISO Settings and Sensitivity. [Online]
Available at: http://www.macrominded.com/iso-settings.html
[Accessed 10 December 2014].
Michael Gruber, Roland Perko, Martin Ponticelli, 2012. THE ALL DIGITAL
PHOTOGRAMMETRIC WORKFLOW: REDUNDANCY AND
ROBUSTNESS. [Online]
Available at:
http://www.isprs.org/proceedings/XXXV/congress/comm1/papers/43.pdf
[Accessed June-December 2014].
NERC Science of The Environment, 2010. Instruments Wild RC 10. [Online]
Available at: http://arsf.nerc.ac.uk/instruments/rc-10.asp?cookieConsent=A
[Accessed 15 October 2014].
Nikon Corporation, 2014. DSLR Camera Basics-shutter speed. [Online]
Available at: http://imaging.nikon.com/lineup/dslr/basics/04/03.htm
[Accessed June-December 2014].
Nikon USA, 2014. Understanding ISO sensitivity. [Online]
Available at: http://www.nikonusa.com/en/Learn-And-
Explore/Article/g9mqnyb1/understanding-iso-sensitivity.html
[Accessed June-December 2014].
Osborne, M., 2013. A Comparison Between Multi-View 3D Reconstruction and
Laser Scanning Technology, Newcastle: Osborne, Matthew.
Philipson & Philpot, 2012. Remote Sensing Fundamentals. [Online]
Available at:
http://ceeserver.cee.cornell.edu/wdp2/cee6100/6100_monograph/mono_
07_F12_photogrammetry.pdf
[Accessed 2014].
Raizman, Y., 2012. Flight Planning and Orthophotos; Leaning Instead of Overlap.
GIM International, June, pp. 35-38.
Stojakovic, V., 2008. Terrestrial Photogrammetry and Application to Modelling
Architectual Objects. Architecture and Civil Engineering , 6(1), pp. 113-
125.
University of Newcastle, 2013. Stereo Photogrammetry. In: Photogrammetry.
Newcastle: Faculty of Engineering and Built Environment.
UVS, i., 2014. UVSI SESAR Proposal. [Online]
Available at: https://www.uavs.org/sesarju
[Accessed 2014].
Zisserman, A. & Hartley, R., 1999. Multiple View Geometry. [Online]
Available at: http://users.cecs.anu.edu.au/~hartley/Papers/CVPR99-tutorial/tutorial.pdf
[Accessed August-December 2014].
6 Appendix A
6.1 USER GUIDE WADE_flight2014
To begin planning a flight for a photogrammetric survey, ensure that the user has the
following information:
Camera Type
Camera Sensor Size
Calibrated focal length of lens
Camera Effective pixels
Camera resolution (max resolution to be used i.e. for Nikon D800 is 7360 x 4912)
6.1.1 To Begin
Open the Excel program WADE_flight2014.
Check to see if the selected camera is already in the drop-down menu.
If the camera does not appear, input the required details in the cells shown.
If the information is not known, use the link shown in red text to research the specifications.
Once all information is complete, click the button to load the camera into the program. The camera will then be available to select in the drop-down list. Select the camera and then complete the next step: the user will be required to specify a lens, with calibrated focal length, to be used with the new camera. This is simply input into the cell shown below, followed by clicking the button.
6.1.2 Defining the survey Area
In order for the program to calculate a flight plan, the survey area must be defined.
This is done through the following steps.
If the Dx and Dy of the area are known, they may be input directly into the cells
shown below. Dx and Dy are shown in the figure of the map area below and refer to
the largest distances in both the x and y directions (longitude and latitude) between points within the survey area.
If the Dx and Dy are not known however, select the Coordinate information tab at
the bottom of the worksheet.
Using a program such as Google™ Earth, locate the area to be surveyed and select
the corner boundaries in the North West, North East, South East and South West
corners so that the survey area is encompassed. Input these details into the
appropriate fields.
6.1.2.1 If using Google™ Earth
The coordinates will be in the form of decimal degrees and may be input directly
into the following cells as appropriate.
6.1.2.2 If coordinates are in degrees minutes seconds
Input the degrees, minutes and seconds into their respective fields as shown below.
The program will then calculate and convert these into decimal degrees and input
them into the above cells.
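A minimal sketch of the conversion the worksheet performs (assuming the standard degrees-minutes-seconds formula; the example coordinate is hypothetical):

```python
# Sketch of the degrees-minutes-seconds conversion:
# decimal degrees = degrees + minutes/60 + seconds/3600,
# with the sign negated for southern/western coordinates.

def dms_to_decimal(degrees, minutes, seconds, negative=False):
    dd = abs(degrees) + minutes / 60 + seconds / 3600
    return -dd if negative else dd

# Example: 32 deg 53' 34.8" S (a southern latitude, hence negative)
print(dms_to_decimal(32, 53, 34.8, negative=True))  # -32.893
```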
Additionally the user may specify a home point and input these coordinates into the
cell shown below.
Once all of the coordinate information has been input, the program will show the area to be surveyed graphically, in relation to longitude and latitude lines, and outline the area with a rectangular box, shown below:
The program calculates Dx and Dy and displays them in the cells below.
These values can be transferred into the flight planning parameters either manually or, by pressing the button, automatically.
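For illustration, a minimal sketch of deriving Dx and Dy in metres from the corner coordinates using an equirectangular approximation; WADE_flight2014's exact method is not shown here, and the corner values are hypothetical.

```python
# Sketch of deriving Dx and Dy (metres) from corner coordinates,
# using an equirectangular approximation (adequate over a
# survey-sized area).

import math

EARTH_R = 6371000.0  # mean Earth radius, metres

def extents(lats, lons):
    """Largest east-west (Dx) and north-south (Dy) spans in metres."""
    mean_lat = math.radians(sum(lats) / len(lats))
    dy = math.radians(max(lats) - min(lats)) * EARTH_R
    dx = math.radians(max(lons) - min(lons)) * EARTH_R * math.cos(mean_lat)
    return dx, dy

# Hypothetical NW, NE, SE, SW corners in decimal degrees:
lats = [-32.8920, -32.8921, -32.8930, -32.8929]
lons = [151.7050, 151.7063, 151.7062, 151.7049]
dx, dy = extents(lats, lons)
print(f"Dx = {dx:.0f} m, Dy = {dy:.0f} m")
```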
6.1.3 Selecting appropriate Flight plan
Once Dx and Dy have been input, the user is required to select the image epochs in seconds (the time between each exposure) from a drop-down list. *This may be determined by the camera options.
After this has been selected, the same is done with the minimum flying height. This is a desired minimum flying height and can be altered after the flight plan has been generated. It appears as a drop-down list from which the user may select any value in 10m increments from 10 to 200m.
The flight plan is then calculated and shows several options related to different flying heights. These begin at the selected minimum flying height and increase in 5m increments to give eight possible solutions.
From this point the user is able to alter the photograph epochs, the minimum flying height and, if available, the lens to be used (changing focal length).
This updates the table shown above and allows the user to make an informed decision based on all of the available information and on what is required for the particular survey.
*The coordinate text file output of the system is still under construction and will be available in early 2015. It will provide a file of longitude, latitude and height for the UAV flight as selected by the user. The coordinates will be calculated from the user's selected flying height flight plan and will be available in .txt format for integration into most flight system software.
7 Appendix B
7.1 Cameras used in Comparisons
Nikon D800
Effective Pixels 36 Megapixels
Max Resolution 7360 x 4912
Sensor Type CMOS
Sensor Size Full Frame (35.9 x 24mm)
Minimum Shutter Speed 30 sec
Maximum Shutter Speed 1/8000 sec
Aperture Priority Yes
Shutter Priority Yes
Manual Exposure Mode Yes
Self-Timer Yes (2 to 20 seconds, exposures 0.5,1,2, or 3
seconds)
HDMI connectivity Yes (HDMI mini)
Remote Control Yes (wireless or wired)
Orientation Sensor Yes
GPS Yes
Weight (Inc. batteries) 1000g
(Digital Photography Review, 1998-2014)
Canon EOS 6D
Effective Pixels 20 Megapixels
Max Resolution 5472 x 3648
Sensor Type CMOS
Sensor Size Full Frame (36 x 24mm)
Minimum Shutter Speed 30 sec
Maximum Shutter Speed 1/4000 sec
Aperture Priority Yes
Shutter Priority Yes
Manual Exposure Mode Yes
Self-Timer Yes (2 or 10 seconds)
HDMI connectivity Yes (HDMI mini)
Remote Control Yes (wireless or wired)
Orientation Sensor Yes
GPS Yes (built in)
Weight (Inc. batteries) 770g
(Digital Photography Review, 1998-2014)
Sony SLT-A99
Effective Pixels 24 Megapixels
Max Resolution 6000 x 4000
Sensor Type CMOS
Sensor Size Full Frame (35.9 x 24mm)
Minimum Shutter Speed 30 sec
Maximum Shutter Speed 1/8000 sec
Aperture Priority Yes
Shutter Priority Yes
Manual Exposure Mode Yes
Self-Timer Yes (2 or 10 seconds)
HDMI connectivity Yes (mini HDMI type c)
Remote Control Yes (wireless or wired)
Orientation Sensor Yes
GPS Yes (built in)
Weight (Inc. batteries) 812g
(Digital Photography Review, 1998-2014)

Weitere ähnliche Inhalte

Was ist angesagt?

Principle of photogrammetry
Principle of photogrammetryPrinciple of photogrammetry
Principle of photogrammetry
Sumant Diwakar
 
Orthorectification and triangulation
Orthorectification and triangulationOrthorectification and triangulation
Orthorectification and triangulation
Mesfin Yeshitla
 
Intelligent two axis dual-ccd image-servo shooting platform design
Intelligent two axis dual-ccd image-servo shooting platform designIntelligent two axis dual-ccd image-servo shooting platform design
Intelligent two axis dual-ccd image-servo shooting platform design
eSAT Publishing House
 
Aerial photographs and their interpretation
Aerial photographs and their interpretationAerial photographs and their interpretation
Aerial photographs and their interpretation
Sumant Diwakar
 

Was ist angesagt? (19)

Photogrammetry chandu
Photogrammetry chanduPhotogrammetry chandu
Photogrammetry chandu
 
Introduction of photogrammetry
Introduction of photogrammetryIntroduction of photogrammetry
Introduction of photogrammetry
 
Photogrammetry
PhotogrammetryPhotogrammetry
Photogrammetry
 
Lecture 4 image measumrents & refinement
Lecture 4  image measumrents & refinementLecture 4  image measumrents & refinement
Lecture 4 image measumrents & refinement
 
An Automatic Detection of Landing Sites for Emergency Landing of Aircraft
An Automatic Detection of Landing Sites for Emergency Landing of AircraftAn Automatic Detection of Landing Sites for Emergency Landing of Aircraft
An Automatic Detection of Landing Sites for Emergency Landing of Aircraft
 
Principle of photogrammetry
Principle of photogrammetryPrinciple of photogrammetry
Principle of photogrammetry
 
Elements of Analytical Photogrammetry
Elements of Analytical PhotogrammetryElements of Analytical Photogrammetry
Elements of Analytical Photogrammetry
 
Aerial photogrammetry vs. terrestrial photogrammetry
Aerial photogrammetry vs. terrestrial photogrammetryAerial photogrammetry vs. terrestrial photogrammetry
Aerial photogrammetry vs. terrestrial photogrammetry
 
Photogrammetry 1.
Photogrammetry 1.Photogrammetry 1.
Photogrammetry 1.
 
Photogrammetry chandu
Photogrammetry chanduPhotogrammetry chandu
Photogrammetry chandu
 
Orthorectification and triangulation
Orthorectification and triangulationOrthorectification and triangulation
Orthorectification and triangulation
 
Intelligent two axis dual-ccd image-servo shooting platform design
Intelligent two axis dual-ccd image-servo shooting platform designIntelligent two axis dual-ccd image-servo shooting platform design
Intelligent two axis dual-ccd image-servo shooting platform design
 
Aerial photographs and their interpretation
Aerial photographs and their interpretationAerial photographs and their interpretation
Aerial photographs and their interpretation
 
Historical Development of Photogrammetry
Historical Development of PhotogrammetryHistorical Development of Photogrammetry
Historical Development of Photogrammetry
 
Inertial Navigation for Quadrotor Using Kalman Filter with Drift Compensation
Inertial Navigation for Quadrotor Using Kalman Filter with Drift Compensation Inertial Navigation for Quadrotor Using Kalman Filter with Drift Compensation
Inertial Navigation for Quadrotor Using Kalman Filter with Drift Compensation
 
Photogrammetry Survey- Surveying II , Civil Engineering Students
Photogrammetry Survey- Surveying II , Civil Engineering StudentsPhotogrammetry Survey- Surveying II , Civil Engineering Students
Photogrammetry Survey- Surveying II , Civil Engineering Students
 
Flight trajectory recreation and playback system of aerial mission based on o...
Flight trajectory recreation and playback system of aerial mission based on o...Flight trajectory recreation and playback system of aerial mission based on o...
Flight trajectory recreation and playback system of aerial mission based on o...
 
Height measurement of aerial photography
Height measurement of aerial photographyHeight measurement of aerial photography
Height measurement of aerial photography
 
BASIC CONCEPTS OF PHOTOGRAMMETRY
BASIC CONCEPTS OF PHOTOGRAMMETRYBASIC CONCEPTS OF PHOTOGRAMMETRY
BASIC CONCEPTS OF PHOTOGRAMMETRY
 

Ähnlich wie THESIS 2014

Innovative Payloads for Small Unmanned Aerial System-Based Person
Innovative Payloads for Small Unmanned Aerial System-Based PersonInnovative Payloads for Small Unmanned Aerial System-Based Person
Innovative Payloads for Small Unmanned Aerial System-Based Person
Austin Jensen
 
09.20295.Cameron_Ellum
09.20295.Cameron_Ellum09.20295.Cameron_Ellum
09.20295.Cameron_Ellum
Cameron Ellum
 
Ellum, C.M. (2001). The development of a backpack mobile mapping system
Ellum, C.M. (2001). The development of a backpack mobile mapping systemEllum, C.M. (2001). The development of a backpack mobile mapping system
Ellum, C.M. (2001). The development of a backpack mobile mapping system
Cameron Ellum
 
Chen Zeng (201375033)
Chen Zeng (201375033)Chen Zeng (201375033)
Chen Zeng (201375033)
Chen Zeng
 
Vehicle Testing and Data Analysis
Vehicle Testing and Data AnalysisVehicle Testing and Data Analysis
Vehicle Testing and Data Analysis
Benjamin Labrosse
 
3D magnetic steering wheel angle and suspension travel detection
3D magnetic steering wheel angle and suspension travel detection3D magnetic steering wheel angle and suspension travel detection
3D magnetic steering wheel angle and suspension travel detection
Bruno Sprícigo
 

Ähnlich wie THESIS 2014 (20)

Visual odometry _report
Visual odometry _reportVisual odometry _report
Visual odometry _report
 
Innovative Payloads for Small Unmanned Aerial System-Based Person
Innovative Payloads for Small Unmanned Aerial System-Based PersonInnovative Payloads for Small Unmanned Aerial System-Based Person
Innovative Payloads for Small Unmanned Aerial System-Based Person
 
main
mainmain
main
 
Honours_Thesis2015_final
Honours_Thesis2015_finalHonours_Thesis2015_final
Honours_Thesis2015_final
 
Particle Filter Localization for Unmanned Aerial Vehicles Using Augmented Rea...
Particle Filter Localization for Unmanned Aerial Vehicles Using Augmented Rea...Particle Filter Localization for Unmanned Aerial Vehicles Using Augmented Rea...
Particle Filter Localization for Unmanned Aerial Vehicles Using Augmented Rea...
 
09.20295.Cameron_Ellum
09.20295.Cameron_Ellum09.20295.Cameron_Ellum
09.20295.Cameron_Ellum
 
Build Your Own 3D Scanner: Course Notes
Build Your Own 3D Scanner: Course NotesBuild Your Own 3D Scanner: Course Notes
Build Your Own 3D Scanner: Course Notes
 
Ellum, C.M. (2001). The development of a backpack mobile mapping system
Ellum, C.M. (2001). The development of a backpack mobile mapping systemEllum, C.M. (2001). The development of a backpack mobile mapping system
Ellum, C.M. (2001). The development of a backpack mobile mapping system
 
eur22904en.pdf
eur22904en.pdfeur22904en.pdf
eur22904en.pdf
 
Chen Zeng (201375033)
Chen Zeng (201375033)Chen Zeng (201375033)
Chen Zeng (201375033)
 
Vehicle Testing and Data Analysis
Vehicle Testing and Data AnalysisVehicle Testing and Data Analysis
Vehicle Testing and Data Analysis
 
Thesis_Prakash
Thesis_PrakashThesis_Prakash
Thesis_Prakash
 
MSc_Thesis
MSc_ThesisMSc_Thesis
MSc_Thesis
 
PhD_main
PhD_mainPhD_main
PhD_main
 
PhD_main
PhD_mainPhD_main
PhD_main
 
PhD_main
PhD_mainPhD_main
PhD_main
 
3D magnetic steering wheel angle and suspension travel detection
3D magnetic steering wheel angle and suspension travel detection3D magnetic steering wheel angle and suspension travel detection
3D magnetic steering wheel angle and suspension travel detection
 
Intro photo
Intro photoIntro photo
Intro photo
 
Bike sharing android application
Bike sharing android applicationBike sharing android application
Bike sharing android application
 
Task-Based Automatic Camera Placement
Task-Based Automatic Camera PlacementTask-Based Automatic Camera Placement
Task-Based Automatic Camera Placement
 

THESIS 2014

  • 1. PHOTOGRAMMETRIC ANALYSIS: UNMANNED AERIAL VEHICLES & GLOBAL POSITIONING Mark Wade 3119753 December 2014 THESIS: University of Newcastle
  • 2. 1 | P a g e Abstract The purpose of this thesis was to investigate the parameters involved in photogrammetry and discover the uses for UAV technology in combination with photogrammetry. This was achieved by the following methods: Investigate and understand the geometric principals that form the algorithms and provide the basis for 3D scene reconstruction and modelling. Determine the accuracies required of measured auxiliary data in order to have accurate georeferenced scene models without the need for ground control points. Determine the camera and image related properties that have an effect on the overall model accuracy and seek to optimise these through the use of a simple flight planning program for photogrammetric surveys. The application of UAV photogrammetry is gaining momentum and as technologies improve and are downsized, the limitations of UAV surveys are diminished. There is sound basis to the algorithms relating to photogrammetry and 3D scene reconstructions. The theorised application of UAV photogrammetry with direct measurement of auxiliary data has been proved and developed by use of integrated systems however to date, produces raw measurements. The flight planning analysis, image properties and camera parameters that allow for accurate image acquisition have been investigated and a basic program developed to assist in determining suitable flight plans for photogrammetric surveys.
  • 3. 2 | P a g e Table of Contents Table of Contents........................................................................................................2 1 Introduction...........................................................................................................5 1.1 Background ...................................................................................................5 1.2 Applications of Photogrammetry ..................................................................6 1.3 Scope .............................................................................................................8 2 Literature Review .................................................................................................8 2.1 Definitions.....................................................................................................8 2.2 History of Photogrammetry.........................................................................10 2.2.1 Invention of Aerial Photogrammetry ...................................................11 2.2.2 How Aerial Photogrammetry works ....................................................11 2.2.3 Close Range Photogrammetry..............................................................13 2.3 Theory of Photogrammetry .........................................................................13 2.3.1 Structure from Motion ‘SfM’...............................................................14 2.3.2 Interior Camera Orientation.................................................................17 2.4 UAV’s and Aerial Photogrammetry............................................................17 2.4.1 Positioning and Camera Attitude .........................................................18 2.4.2 Measuring Auxiliary data.....................................................................20 3 Flight Plan Optimisation.....................................................................................25 3.1 Ground/Object Coverage.............................................................................26 3.2 Number of Photographs for UAV Survey...................................................29 3.3 Pixel Size and Accuracies ...........................................................................32 3.3.1 Planimetric Accuracy...........................................................................33 3.3.2 Height Accuracy...................................................................................35 3.4 Estimating Total UAV Survey Time...........................................................38 3.5 Camera Settings...........................................................................................41 3.5.1 Camera Sensors....................................................................................42 3.5.2 Shutter Speed .......................................................................................44 3.5.3 Aperture................................................................................................46 3.5.4 ISO Sensitivity .....................................................................................47 4 Conclusions and Recommendations...................................................................48 4.1 Further Research..........................................................................................50 5 Works Cited........................................................................................................52
  • 4. 3 | P a g e 6 Appendix A.........................................................................................................56 6.1 USER GUIDE WADE_flight2014..............................................................56 6.1.1 To Begin...............................................................................................56 6.1.2 Defining the survey Area .....................................................................57 6.1.3 Selecting appropriate Flight plan .........................................................59 7 Appendix B.........................................................................................................61 7.1 Cameras used in Comparisons ....................................................................61 Table of Figures Figure 1: Industrial Surveying point cloud 6 Figure 2: Industrial Photogrammetry accuracy............................................................6 Figure 3: Mount St. Helens in Washington, USA........................................................7 Figure 4: Transformations and Rotation matrix..........................................................9 Figure 5: Leonardo Da Vinci .....................................................................................10 Figure 6: Felix Nadir 1820-1910................................................................................11 Figure 7: Wild RC D10 Stereo plotter (NERC Science of The Environment, 2010) 12 Figure 8: Relationship between a stereo pair of images (Kniest, 2013) ....................12 Figure 9: Epipolar plane (Stojakovic, 2008)..............................................................14 Figure 10: SfM bundle adjustment geometry (Geodetic Systems Inc., 2014)...........15 Figure 11: Dig Tsho SfM data products Oblique view showing per-cell (1 m2) point densities. Data transformed to UTM Zone 45N geographic coordinate system........16 Figure 12: (Eisenbeis, 2009) ......................................................................................18 Figure 13: Professor Friedrich Ackerman (Ganjanakhundee, 2013) .........................19 Figure 14: Prototype of GPS receiver integrated with camera (G.Forlani, L. Pinto, R. Roncella, D. Pagliari, 2013).......................................................................................22 Figure 15: KINGSPAD layout (J.Skaloud, M.Cramer, K.P.Schwarz, 1996) ............24 Figure 16: Flight Optimisation for UAV surveys (Wade, 2014) ...............................26 Figure 17: Camera showing Sensor (Digital Photography Review, 1998-2014) ......26 Figure 18: Focal length and Principal point (ExposureGuide.com, 2014) ................27 Figure 19: Comparisons between sensor sizes (GPS photography.com, 2012).........28 Figure 20: Ground Sampling Distance (Digital Photography Review, 1998-2014)..29 Figure 21: Image overlap along a strip (AdamTechnology, 2012)............................30 Figure 22: Example for Image overlap (Wade, 2014) ...............................................31 Figure 23: Ground pixel size vs Dtarget (Wade, 2014).................................................32 Figure 24: Error ellipse (AdamTechnology, 2012)....................................................34 Figure 25: Gp, Planimetric accuracy and Height accuracy vs Dtarget (Wade, 2014)...34
  • 5. 4 | P a g e Figure 26: Comparison of lenses Sony SLT-A99 (Wade, 2014)...............................36 Figure 27: Number of images for different lenses (Wade, 2014) ..............................36 Figure 28: Sony SLT-A99 35mm focal length (WADE_flight2014)........................37 Figure 29: Sony SLT-A99 20mm focal length (WADE_flight2014)........................37 Figure 30: Dji Ground Station 4.0 Waypoint editing.................................................39 Figure 31: Input parameters for WADE_flight2014..................................................40 Figure 32: WADE_flight2014 outputs for survey of football field using Canon EOS 6D...............................................................................................................................40 Figure 33: Camera Comparisons (Wade, 2014).........................................................41 Figure 34: Camera Settings (Digital Camera World, 2012) ......................................42 Figure 35: CCD (left) and CMOS (right) image sensors...........................................43 Figure 36: Canon 5D Internal mechanisms (2000-2013 Little Guy Media, 2002)....44 Figure 37: WADE_flight2014 output showing all parameters ..................................45 Figure 38: Camera Aperture (ExposureGuide.com, 2014) ........................................46 Figure 39: Depth of field (ExposureGuide.com, 2014) .............................................46 Figure 40: Changing ISO sensitivity (macrominded.com, 2014) ..............................47 Figure 42: A2 flight controller components...............Error! Bookmark not defined.
1 Introduction

1.1 Background

Aerial photogrammetry has been used as a reliable source for mapping applications for several decades. Distances and elevations are determined through the use of formulas and complicated computer algorithms such as Structure from Motion (SfM) software packages. The high cost and time associated with aerial photography has limited its use in small scale applications. A small aerial photo set for use in volumetric calculations and other survey data can cost in excess of $2,000, and if data is corrupted or missing from the images, the aircraft is required to capture the entire set again. As well as cost and time, the accuracy of aerial photogrammetry is limited and useful only for applications that tolerate low accuracy.

Recent advancements in digital photography, Global Navigation Satellite Systems (GNSS) and computer modelling software have made the process of gaining quality data from an aerial survey simpler and quicker. The applications of photogrammetry are extensive, as it can be used to create realistic models of real world locations, and the addition of aerial vehicle technology allows data to be acquired easily at remote sites.

The technology of Unmanned Aerial Vehicles (UAV) has greatly improved over the past decade and is becoming more cost effective and accessible to civilians. According to the Unmanned Vehicle Systems International (UVSI) definition, "A UAV is a generic aircraft designed to operate with no human pilot on board." (UVS, 2014).

There are several variables to consider when combining UAV technology with photogrammetry. The defining elements of a survey will depend on the data required and the budget. This thesis investigates the applications of photogrammetry combined with new UAV drone technologies for the purpose of acquiring accurate survey data that can be used for a variety of engineering, environmental and surveying purposes.
1.2 Applications of Photogrammetry

The benefits of photogrammetry in environmental and engineering applications are extensive and always increasing. The accuracy of the science has improved and presently, in the right environment and with computer aided software, its accuracies rival and even surpass those of other surveying instruments.

Figure 1: Industrial Surveying point cloud

Figure 1 shows a point cloud generated using the Structure from Motion (SfM) software package Agisoft PhotoScan. The photogrammetric survey was undertaken as part of a 'check' for an industrial survey at the University of Newcastle, NSW, Australia in 2014. The images were taken at random with only a basic understanding of correct camera location and geometry. The results achieved after checking the coordinates were of high accuracy. Figure 2 shows the accuracy achieved (column 5) for the check. As shown, the errors were less than 0.5mm for most coordinates.

Figure 2: Industrial Photogrammetry accuracy

For a method that took a fraction of the time compared to the actual survey, these results are very promising for the automation and functionality of photogrammetric software.
With photogrammetry in applications such as ongoing monitoring of areas like mine sites or natural activity zones, once the initial set up has been completed the monitoring can be far quicker and more cost effective than traditional surveying. Western Washington University graduate student Angela Diefenbach is pioneering a new method for the study and evaluation of active volcanoes and their risk of eruption. The monitoring team is using photogrammetry from static cameras, aerial photography and commercially available software to build accurate three dimensional models of the volcano. The models can then be compared to past models to determine the change in volume and rate of growth, key indicators of volcanic activity (Diefenbach, et al., 2006).

Figure 3: Mount St. Helens in Washington, USA

With the addition of UAV technology to photogrammetry, many previously unavailable or inhospitable areas are now able to be surveyed without the need for human interaction. Sensitive areas such as sacred indigenous sites and archaeological sites will benefit from the increasing technology advancements that are making UAVs and photogrammetry invaluable. Accurate models and measurements can be realised on an untouched area, and the UAV allows the images to be taken closer and at more angles than ever before. One example of this was the Ventimiglia project in Italy, an archaeological site where aerial and terrestrial photogrammetry were combined to achieve a root mean square error of around 3mm in the X and Y coordinates of the model using only 5 control points (Erica Nocerino, Fabio Menna, Fabio Remondino, Renato Saleri, 2013).

In 2012, testing took place involving a UAV system and photogrammetry to track and model the growth and rate of the British Ice Sheet. The overall accuracy was found to be 0.5m which, for the purpose of mapping low amplitude bedforms, was suitable (Clayton, 2012).
1.3 Scope

The scope of this thesis was to research the achievable accuracies of small UAV systems combined with GNSS/INS in photogrammetry without the use of GCPs. This was achieved by reviewing various publications and experiments already undertaken and investigating the theoretical limitations and constraints that affect the accuracy of field data. This thesis demonstrates how to optimise a flight plan for accurate and efficient data collection in order to streamline a photogrammetric survey. The charts and simplified equations presented in this thesis assist in determining the accuracy of a survey and allow a user to determine the flight time and data required to produce the desired results. The examination of camera and lens systems allows optimal combinations to be determined, and methods of determining an appropriate relationship for individual purposes are also presented.

2 Literature Review

2.1 Definitions

Principal Distance: The principal distance is defined through internal camera calibration as the focal length of the lens at infinity focus. It is the distance from the perspective centre of the lens to the image plane (Philipson & Philpot, 2012).

Principal Point of Auto-Collimation: The PPA is the point on the image plane at which an image would originate if the focal plane of the camera were perfectly perpendicular to the direct axial ray coming through the perspective centre of the lens (Karara, 1998).

Fiducial Centre: Principal point or Indicated Principal Point (IPP) are also terms used to describe this internal camera parameter.
It is defined as the location on the image plane or image sensor where the rays from opposing fiducial marks intersect. This would ideally be located in the centre of the image plane; however, it is found during interior orientation calibration of the camera and lens and is affected by lens distortion. The distance from the PPA (x0, y0) to the IPP (xp, yp) rarely exceeds 1mm (Karara, 1998).

Radial Lens Distortion: If the image formed by an 'off-axis' target is not in the position that a 'perfect lens' would produce, but is either radially closer to or further from the PPA, then it is said to have been radially distorted. Radial distortion is generally represented graphically for a lens, with distortion in micrometres plotted against radial distance in millimetres. For photogrammetric purposes the distortion can be treated as coincident with the PPA and symmetric.

Nadir Point: The point on the image which corresponds to the ground nadir; that is, the point at which a vertical plumb line from the perspective centre of the lens to the ground nadir intersects the image.

Rotation Matrix: Denoted as the matrix 'R' in the collinearity equations for exterior camera orientation. The rotation matrix describes the rotation of the camera about the three axes as roll, pitch and yaw (ω, φ, κ) or azimuth, tilt and swing (a, t, s). Often shown as the 'm' matrix.

Figure 4: Transformations and Rotation matrix
2.2 History of Photogrammetry

There is no universally accepted definition of the word photogrammetry; however, a very apt description by W.D. Philpot from Cornell University explains photogrammetry as: "Photo-gram-metry" (light-drawing-measurement), "The art, science and technology of obtaining reliable spatial information about physical objects and the environment through processes of recording, measuring and interpreting photographic images and patterns of recorded radiant electromagnetic energy and other phenomena." (Philipson & Philpot, 2012).

Photogrammetry is the science of using photographs taken to certain specifications and accurately observing measurements from the images. The most popular and widely used applications of photogrammetry include land maps (aerial photogrammetry), topographic maps (aerial photogrammetry) and three dimensional modelling (close-range photogrammetry). Close range and aerial are the two categories that photogrammetry is usually divided into. This paper focuses on UAV photogrammetry from a multi rotor drone, which is unique in that it can fit into both categories.

The concept of photogrammetry dates back as far as 1480, when Leonardo da Vinci wrote: "Perspective is nothing else than the seeing of an object behind a sheet of glass, smooth and quite transparent, on the surface of which all the things may be marked that are behind the glass. All things transmit their images to the eye by pyramidal lines, and these pyramids are cut by the said glass. The nearer to the eye these are intersected, the smaller the image of their cause will appear" (Doyle, 1964).

Figure 5: Leonardo Da Vinci

Over the next three to four hundred years the science slowly progressed, and a few significant developments paved the way for it to be widely used and accepted once camera systems could produce the required images used in the measurements.
There were four main periods of development, each progression following roughly a 50 year cycle (Center for Photogrammetric Training, 2008):

 Plane table photogrammetry, from circa 1850 to 1900
 Analogue photogrammetry, from circa 1900 to 1960
 Analytical photogrammetry, circa 1960 to present, and
 Digital photogrammetry, present

2.2.1 Invention of Aerial Photogrammetry

With the invention of photography and the ability to take exposures during flight, the amalgamation of this technology with the military quickly followed. In 1855 a balloon flying at a height of around 80m obtained the first aerial photograph. Approximately four years later the same photographer (Félix Nadar) was employed by the French Emperor Napoleon to acquire reconnaissance photographs in preparation for the Battle of Solferino. In order to transfer points from the photo to a map, a grid overlay was used and the battle plans were developed (Center for Photogrammetric Training, 2008).

Figure 6: Félix Nadar 1820-1910

2.2.2 How Aerial Photogrammetry works

Aerial photogrammetry uses images taken by a camera mounted in an aircraft, usually directed at the ground. In order to obtain measurements from the images, a series of overlapping photographs is required along a flight path over the target area. In the early days, before computer software became available, the images were processed in a stereo-plotter (an instrument that allowed an operator to view two photos in a stereo view). Figure 7 below shows an example of a typical stereo-plotter.
Figure 7: Wild RC D10 Stereo plotter (NERC Science of The Environment, 2010)

The images were aligned, and the stereo-plotter allowed the operator to tilt, rotate and scale the images using certain known details about each image. This process is now very much streamlined by the use of computer software, and mathematical modelling quickly aligns the photos into useable 'stereo pairs'. Once the rotations and scale have been corrected, measurements can be taken between the images to determine how far apart the camera positions were, as well as measuring planimetric distances and elevations on the objects shown in the images. Figure 8 shows the relationship between a stereo pair of photographs.

Figure 8: Relationship between a stereo pair of images (Kniest, 2013)

The manual method is no longer used, as computer programs provide faster and more accurate results, removing the human error involved in the measurements. As technology in photography and flight systems has progressed, so too has the ability to create accurate maps using photogrammetry. The main benefit of aerial photogrammetry at present is its ability to cover a large area far quicker than a field survey could. This, however, comes at the sacrifice of accuracy, as the vehicle is far from the object and provides only crude measurements in small scale applications. A closer look into how aerial photogrammetry is used presently is presented in chapter 2.4.
2.2.3 Close Range Photogrammetry

Close range or terrestrial photogrammetry is simply defined as any photogrammetry other than aerial. Typically the output of a close range photogrammetric survey is a 3D model/point cloud from which measurements can be deduced. Almost anything can be modelled: engineering structures, forensic scenes, mines, archaeological finds, even living beings. Close range photogrammetry has the same origins as aerial, due to the shared nature of film and the camera, and precision advanced as methods for measurement and exposure were enhanced. Due to the relative closeness of objects to the lens, the resulting data is sharper, clearer and of higher quality when compared to aerial. As the equations of measurement involve the distance from the object, precision and achievable accuracies also increase for close range work.

2.3 Theory of Photogrammetry

Human sight is based on a stereo view, whereby the physical environment of an individual is given scale and depth through the intersecting light rays entering the eyes. The brain gathers the incoming information and recognises what it is seeing as a three dimensional space (Doyle, 1964). Structure from Motion (SfM) computer software uses complex algorithms and mathematical models to replicate this process and define the location of objects within an image. The software provides an arbitrary coordinate system to relate the image points.

The interior calibration of the camera and lens system is important, as this defines the reference coordinate system that photogrammetric measurements are based on. Most SfM programs have an interior camera calibration function built in to simplify this process. By calibrating a camera and lens, details such as principal distance, scale, fiducial centre (principal point) and lens distortion can be determined under varying conditions.

In order to model a two dimensional image in three dimensions, a reference coordinate system must be known between pairs of images. This coordinate system can be deduced using the known camera and image parameters. Epipolar planes are found and traced on the images.
An epipolar plane intersects the fiducial centre of the image, and the software also locates reference points on the intersecting plane (Stojakovic, 2008). This can be completed for any points that appear in more than one image across the entire model. Through this process, known as a bundle adjustment, the 3D model is created on some arbitrary coordinate system.

Figure 9: Epipolar plane (Stojakovic, 2008)

It is theorised that the precision of calibrated images may be increased by considering right angles and locating points in the object space on these right angled surfaces (Stojakovic, 2008).

2.3.1 Structure from Motion

SfM is a method of creating realistic three dimensional models by estimating the object space and geometry together with calibrated internal camera specifications. It rests on the same principle as stereo photogrammetry; however, it uses a dataset, usually with large redundancies, of overlapping images from a variety of locations and angles around the target object. The software can be either fully automated or partially automated requiring user input. In order to create a 3D model the software requires at least three images and typically follows a process with the following steps:

 Image acquisition and key point extraction
 Creation of 3D geometry in point cloud
 GCP location input
 Aligning images
 Mesh generation
 Orientation and translation onto a geo-referenced coordinate system (optional)
Figure 10: SfM bundle adjustment geometry (Geodetic Systems Inc., 2014)

The reconstruction of an object/scene begins by using the first pair of stereo images to create a model. SfM uses the algorithm to identify pairs of images, and the rotations/scales of the images, in relation to the known camera and lens parameters. Once this has been completed, the entire image dataset is included in the algorithm, which adds detail and structure to the model. The point cloud is then generated in the following steps, and the positions of the camera stations and the camera parameters are resolved.

In the early 1980s Hugh Christopher Longuet-Higgins discovered that if there were enough similar points observed between a pair of images, the camera positions and orientations could be determined using a set of simultaneous linear equations and a least squares solution. Because eight point correspondences between the two images are required, this became known as the eight-point algorithm. This method was ground-breaking at the time of its inception. It allowed researchers to study, enhance and expand Longuet-Higgins' two-view reconstruction to 3, 4 and n-view algorithms (Hartley, 2004).

SfM models are based on the n-view reconstruction algorithms, which can vary from software to software. Depending on certain aspects of the algorithm, they are more suited to certain types of models and photogrammetric reconstructions. The algorithms all stem from the simple epipolar type lines between images that were used in stereo-photogrammetry. In Hartley & Zisserman's book 'Multiple View Geometry in Computer Vision' a few of the various methods of reconstruction are detailed at length.

If the distance from the camera station to the scene is large relative to the depth within the scene, there is an effective method to compute the geometry of the scene. To reconstruct a scene from n-views, a simplified camera model known as the affine camera is used to approximate the perspective projection.
If a set of points is visible in a set of n-views involving the affine camera, an algorithm known as the factorisation algorithm can be used to compute the geometry of the scene and the specific camera models in one step (Hartley, 2004). The downfall of this model is that all points must be visible in all views, which for most aerial photography is not practical.

The process known as bundle adjustment was developed and is now the dominant methodology for 3D scene reconstruction in n-view models. The bundle adjustment procedure attempts to fit a non-linear model to the images and points. The benefit of this method is that it is a simplified and generalised procedure that can be applied to a wide variety of problems. Due to the iterative nature of the bundle adjustment, however, it is not guaranteed to converge on the optimal solution from an arbitrary starting point. To rectify this problem, most SfM packages use an initialisation step before the bundle adjustment to compute a 'best guess' starting point for the algorithm (Hartley, 2004). How well the initialisation process is completed determines the speed of the iteration of the bundle adjustment; however, with computer power ever increasing, these processes are becoming quicker regardless of the starting point.

An example of the output from a SfM package is shown in Figure 11. It shows the Dig Tsho moraine-dam complex in the Khumbu Himal, Nepal, captured using camera stations located around the site. 1649 images and 35 GCPs were used to create the reference frame of the 3D model. The dense reconstruction of the model created a point cloud of 13.2 × 10⁶ points, and processing took 22 hours. The data was geo-referenced using the GPS information collected (M.J. Westoby, J. Brasington, N.F. Glasser, M.J. Hambrey, J.M. Reynolds, 2012).

Figure 11: Dig Tsho SfM data products. Oblique view showing per-cell (1 m²) point densities. Data transformed to UTM Zone 45N coordinate system.
2.3.2 Interior Camera Orientation

All camera lens systems have imperfections which would not be noticed by the naked eye. When dealing with very fine detail in photogrammetry and the angles between epipolar lines, it is important to note the effect these imperfections have on the direction of the rays through the lens. Karara (1998) said "Interior orientation is the term used to describe the parameters which model the passage of light rays through the lens and onto the image plane".

The camera calibration provides the transformation between an image point and the light ray, in what is referred to as Euclidean 3-space, as a value 'k' (Zisserman, 1999). Euclidean 3-space is simply a representation of the three dimensional coordinate space in which the camera parameters may be observed. The 'k' terms can be introduced as coefficients in a polynomial series, shown in equation (1), in order to determine the radial distortion of a lens:

\Delta r = k_1 r^3 + k_2 r^5 + k_3 r^7    (1)

where r is the radial distance from the principal point and k_1, k_2, k_3 are the radial distortion coefficients. Karara (1998) states that "For most lenses three coefficients [k1, k2, k3] are sufficient to describe the distortion curve completely, but for exceptional lenses such as the 'fish-eye' up to five coefficients may be required."

Zisserman (1999) describes in detail how the matrices and algorithms of lens calibration are derived, as does research completed by Balletti, et al. (2014), in which comparisons between various calibration techniques were investigated; however, this detail is not essential for the subject of this thesis. It is sufficient to understand that the lens distortion parameters must be known and defined in order to reduce errors in measurement and achieve the most effective and accurate results for a photogrammetric survey.

2.4 UAVs and Aerial Photogrammetry

Unmanned flight system technologies have seen exponential growth over the past decade, resulting in highly autonomous and accessible machines capable of a large range of functions. Figure 12 shows the relationship between object area size and accuracy using UAVs compared to other forms of photogrammetry in 2006.
Figure 12: (Eisenbeis, 2009)

This figure would be altered now due to the progress UAV technology has made over the past several years. Advancements in accuracy and flight navigation software have made the use of UAV systems in photogrammetry an economical and viable option for a number of applications. Real time positioning using GNSS receivers mounted to the body of a UAV increases the positioning accuracy of the camera station at the time an image is captured, which in turn increases the overall accuracy of the geo-referenced model.

2.4.1 Positioning and Camera Attitude

Photogrammetry and SfM packages currently work with control points that have been located and coordinated before the model is created in order to scale and geo-reference the data. From the known positions of GCPs and the interior calibration parameters of the lens and camera, the position and orientation of the image stations are determined. An idea was proposed by the German scientist Friedrich Ackerman in 1982 in his paper 'Utilisation of Navigation Data for Aerial Triangulation'.
He proposed that if one could accurately measure camera orientation data in conjunction with GPS navigation recorded during a flight, aerial triangulation techniques could become obsolete. "It was also demonstrated that the utilization of such auxiliary data in block adjustment is highly effective…further research is required and encouraged into GPS and Inertial navigation systems…" (Ackerman, 1984, p. 7).

Ackerman undertook an experiment in 1982, named Bodensee, to prove his theory of the importance of this auxiliary positional data. The test included five flight strips over Lake Constance covering an area of 480 km². Beacons around the site gave the aircraft a reference for navigational data, which was then post processed to give x-y coordinates of the camera stations for each image. The overall trilateration adjustment was found to have an average precision in the order of 1m. Ackerman was impressed with the result and could see that the only limitations of his theory would be in the auxiliary data.

Figure 13: Professor Friedrich Ackerman (Ganjanakhundee, 2013)

"..ground control points can only be deleted completely when constant or systematic errors of the auxiliary data are negligible or are calibrated otherwise, the example demonstrates convincingly the effectiveness of auxiliary positioning data and the success of joint block adjustment." (Ackerman, 1984, p. 7).
2.4.2 Measuring Auxiliary data

The concept of using auxiliary data to negate the need for GCPs has continued to be an interesting area for experimentation since Ackerman's Bodensee project. The formulae that photogrammetric software is based on indicate that an object's size, shape and location can be determined mathematically without using points measured on the object itself. The auxiliary data consists of two parameters containing 6 unknowns for each camera position:

 GPS position (X, Y, Z)
 Camera attitude (pitch, yaw and roll)

In order to achieve the best possible adjustment, the two parameters need to be measured accurately and at very specific times. For example, in Bodensee, Ackerman and his staff were able to measure GPS position with a variance in coordinates of 2.2m (Ackerman, 1984, p. 6). This resulted in a 1m overall precision of coordinated points. If a measurement is not recorded at the exact time an image is taken, the errors can be substantial, as the plane or UAV is travelling at speed. If a UAV were flying at 5 m/s and the GPS coordinate were timestamped 1/10th of a second after the image was taken, the camera station would be out by 0.5m. Image timing is discussed in chapter 3.4 of this paper.

In the mathematical model of a bundle adjustment the collinearity equations are:

\xi = \xi_0 - c \frac{r_{11}(x - x_0) + r_{12}(y - y_0) + r_{13}(z - z_0)}{r_{31}(x - x_0) + r_{32}(y - y_0) + r_{33}(z - z_0)}    (2)

\eta = \eta_0 - c \frac{r_{21}(x - x_0) + r_{22}(y - y_0) + r_{23}(z - z_0)}{r_{31}(x - x_0) + r_{32}(y - y_0) + r_{33}(z - z_0)}    (3)

(G.Forlani, L. Pinto, R. Roncella, D. Pagliari, 2013)

Where:
ξ & η are the image coordinates
ξ0, c & η0 are the interior orientation parameters of the camera
x, y, z are the ground coordinates of a point
x0, y0, z0 are the perspective centre coordinates
rij are the elements of the attitude matrix (the rotation matrix from the object system to the camera system), shown in chapter 2.1
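As a numeric illustration of equations (2) and (3), the short sketch below projects a ground point into image coordinates. It is a minimal sketch only, not code from the cited paper: the rotation convention and the example values are assumptions made for demonstration.

```python
import numpy as np

def collinearity(ground_pt, cam_pos, R, c, xi0=0.0, eta0=0.0):
    """Project a ground point to image coordinates (equations (2), (3)).

    ground_pt  (x, y, z) ground coordinates of the point
    cam_pos    (x0, y0, z0) perspective centre coordinates
    R          3x3 rotation matrix from the object system to the camera system
    c          principal distance (same units as the image coordinates)
    xi0, eta0  principal point offsets
    """
    d = np.asarray(ground_pt) - np.asarray(cam_pos)
    xi = xi0 - c * (R[0] @ d) / (R[2] @ d)
    eta = eta0 - c * (R[1] @ d) / (R[2] @ d)
    return xi, eta

# Nadir-looking camera 50 m above a point offset 5 m east and 2 m north
xi, eta = collinearity((105.0, 202.0, 0.0), (100.0, 200.0, 50.0),
                       R=np.eye(3), c=0.035)
print(xi, eta)   # 0.0035, 0.0014 (metres on the image plane)
```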
It is important to note that when the GPS position is recorded, the antenna phase centre is not located at the same position as the image centre. An observation equation needs to be included in the adjustment in order to relate the camera position to the GPS antenna phase centre (G.Forlani, L. Pinto, R. Roncella, D. Pagliari, 2013, p. 4):

x_a = x_0 + R\,e    (4)

Where:
xa is the position of the GPS antenna phase centre at the time of exposure, in a cartesian/local coordinate system
x0 is the coordinates of the camera perspective centre
e is the offset of the GPS antenna in the image space (fixed)
R is, as above, the attitude/rotation matrix at the time of exposure t, expressed through the rotation angles at that instant:

R = R(\omega(t), \varphi(t), \kappa(t))    (5)

It is important that the vector 'e' is determined via a calibration and that it stays constant. This way the only unknown in the equation is the coordinate of the image centre (perspective centre). The unknown is easily solved for during the bundle adjustment.

Integration of a useable system for terrestrial photogrammetry has been shown to be quite accurate without using ground control points. The article 'Terrestrial photogrammetry without ground control points' examines a device where a camera was mounted on a pole with a GPS receiver on top. The eccentricity vector was calculated accurately using a total station and was assumed to be constant with respect to the rotations of the camera (G.Forlani, L. Pinto, R. Roncella, D. Pagliari, 2013, p. 6).
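The lever-arm relation of equation (4) can be applied in reverse to recover the camera perspective centre from a logged antenna position. The sketch below is a hedged illustration under assumed conventions (rotations applied in the order roll, pitch, yaw; a calibrated offset vector e); the function names are illustrative, not from any cited system.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """R = Rz(kappa) @ Ry(phi) @ Rx(omega), angles in radians."""
    cw, sw = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, cw, -sw], [0, sw, cw]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def perspective_centre(x_antenna, attitude, lever_arm_e):
    """Invert equation (4): x0 = xa - R e."""
    R = rotation_matrix(*attitude)
    return np.asarray(x_antenna) - R @ np.asarray(lever_arm_e)

# Example: antenna 0.30 m above the camera along the body z-axis,
# UAV pitched 5 degrees nose-down at the moment of exposure
x0 = perspective_centre(
    x_antenna=[383112.420, 6354870.115, 52.300],   # E, N, H (m)
    attitude=np.radians([0.0, -5.0, 90.0]),        # omega, phi, kappa
    lever_arm_e=[0.0, 0.0, 0.30])
print(x0)
```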
Figure 14: Prototype of GPS receiver integrated with camera (G.Forlani, L. Pinto, R. Roncella, D. Pagliari, 2013)

Table 1 shows the accuracies they were able to achieve at tie points and check points using photogrammetry on a variety of objects.

Table 1: Accuracy at tie points and check points (G.Forlani, L. Pinto, R. Roncella, D. Pagliari, 2013)

In 1996 a paper was published by the University of Stuttgart along with the University of Calgary, titled 'Exterior Orientation by Direct Measurement of Camera Position and Attitude', which takes an in depth look into how the measurement of auxiliary data is achieved and how it affects the block adjustment and accuracy of a model. The investigation involved airborne data acquisition and a GPS/INS system that was designed for the purpose of recording this data. Their system takes into account the GPS and camera mis-orientations with the INS system, as well as the displacement vectors relating to the camera imaging centre. The study found that the GPS and attitude data have a very strong theoretical application to improve the reliability and accuracy of a bundle adjustment; however, the authors believed that there still needed to be greater reliability of GPS positioning data. The equations, in the form of a matrix calculation, took the form:
\begin{bmatrix} X_p \\ Y_p \\ Z_p \end{bmatrix} = \begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \end{bmatrix} + a \, R_b^m(\omega, \varphi, \kappa) \, dR_p^b \begin{bmatrix} x_p^r \\ y_p^r \\ -f \end{bmatrix}    (6)

(J.Skaloud, M.Cramer, K.P.Schwarz, 1996, p. 127)

Where:
(Xp, Yp, Zp) and (xp^r, yp^r) are the point coordinates in a geodetic reference system and the reduced image coordinates in the photograph respectively
(X0, Y0, Z0) are the coordinates of the camera perspective centre in the reference frame
a is a point dependent scale factor
f is the lens focal length
R_b^m(ω, φ, κ) is a 3D transformation matrix which rotates the camera frame into the GPS reference frame
dR_p^b = f(δ1, δ2, δ3) is the constant mis-orientation matrix between the INS system and the imaging sensor; its solution can be obtained by using an in-flight calibration

The major assumption is that the imaging sensor, GPS antenna and INS system stay fixed in their orientation and relative position. In a UAV where the camera is able to move around freely on its gimbal, any INS sensors would need to be located on the gimbal to determine the orientations at each point in time. This also changes the relative orientation between the GPS antenna and the INS system, which would need to be derived for each exposure time.

In order to solve equation (6), the GPS/INS derived positions and attitude need to be defined along with the sensor calibration (interior orientation), allowing all terms on the right hand side of the equation to be known. This enables the image point coordinates to be represented in the object space and georeferenced. This is how the auxiliary data achieves a georeferenced model without the need for ground control points. The difficulty lies in how to accurately determine each of the terms on the right hand side of the equation at exactly the same time as each exposure is taken.
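To make equation (6) concrete, the sketch below maps one reduced image observation into the object space given GPS/INS derived position and attitude. It is a hedged illustration rather than the cited authors' implementation: the rotation convention, an identity boresight matrix dR, and resolving the scale factor a by intersecting the ray with flat ground are all simplifying assumptions.

```python
import numpy as np

def rot(omega, phi, kappa):
    """R = Rz(kappa) @ Ry(phi) @ Rx(omega), angles in radians."""
    cw, sw, cp, sp = np.cos(omega), np.sin(omega), np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    return (np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]]) @
            np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]) @
            np.array([[1, 0, 0], [0, cw, -sw], [0, sw, cw]]))

def georeference(img_xy, f, cam_pos, attitude, boresight, ground_z=0.0):
    """Direct georeferencing of one image point, in the spirit of equation (6).

    img_xy    reduced image coordinates (xp_r, yp_r) in metres
    f         focal length in metres
    cam_pos   perspective centre (X0, Y0, Z0) from GPS/INS
    attitude  (omega, phi, kappa) from the INS, in radians
    boresight constant mis-orientation dR (3x3) from in-flight calibration
    The scale factor a is resolved by intersecting the image ray with a
    horizontal plane at ground_z (a simplifying assumption).
    """
    ray = rot(*attitude) @ boresight @ np.array([img_xy[0], img_xy[1], -f])
    a = (ground_z - cam_pos[2]) / ray[2]   # scale so the ray reaches the ground
    return np.asarray(cam_pos) + a * ray

# Example: nadir-looking camera 50 m above flat ground
point = georeference(img_xy=(0.010, -0.004), f=0.035,
                     cam_pos=(1000.0, 2000.0, 50.0),
                     attitude=np.radians([0.0, 0.0, 0.0]),
                     boresight=np.eye(3))
print(point)   # ground coordinates of the imaged point
```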
The investigation of implementing a system of this kind, called KINGSPAD (KINematic Geodetic System for Position and Attitude Determination), shown below in Figure 15, was undertaken in 1995 as mentioned in the above journal article (J.Skaloud, M.Cramer, K.P.Schwarz, 1996).

Figure 15: KINGSPAD layout (J.Skaloud, M.Cramer, K.P.Schwarz, 1996)

The results showed that there were large discrepancies in GPS position, as only 4-5 satellites were available. The errors in selected coordinates of ground control had a standard deviation of 0.3m horizontally and 0.5m vertically, flying at a height of 900m. The standard deviations of the GPS and attitude measurements are shown in Table 2. Again the results were positive, and the main limiting factor was the acquisition of derived GPS/INS positions at the right times. Based on the 1996 paper, if the experiment were to be reproduced with a similar system using present technology, the GPS data should be far more reliable and increase the accuracy substantially.

Table 2: Measurement std dev. of GPS/INS system (Wade, 2014)

Camera Parameter    σ of measuring errors    Control point σ of errors
Pitch (ω)           1'48"                    -
Yaw (φ)             36"                      -
Roll (κ)            1'48"                    -
Easting (X)         0.15m                    0.3m
Northing (Y)        0.15m                    0.3m
Height (Z)          0.2m                     0.5m
3 Flight Plan Optimisation

The use of UAV systems in surveys has been proven to be cost effective, and with the improvement of the previously mentioned GPS/INS integrated systems, the accuracies of the produced models will be highly beneficial in a variety of applications. This thesis sought to identify a means of planning a UAV flight in order to survey specified areas and to give detailed analysis of how to optimise the survey and achieve the required accuracies.

With the expansion of the UAV market into commercial and personal use, there are a great deal of customised packages and options to suit different budgets, applications and experience levels. This paper investigates flight planning using multi rotor UAV systems, which can range from $300 to over $100,000. The difference between systems in different price brackets can lie in battery time, range, or the accuracy of the in-flight system or software. Research was based on the Vulcan Hexacopter (1.08m diameter carbon fibre frame), which was purchased by the University of Newcastle, NSW, Australia for approximately $15,000 AUD. The performance of the Vulcan Hexacopter provides accurate output for a mid-range device.

After relationships were found between the variable aspects of a UAV survey, a simple program, WADE_flight2014, was developed. It allows a user to select an area using a widely available mapping program such as Google™ Earth, select a camera system to attach to the hexacopter, select the lens (fixed focal length is recommended for best results), and select a minimum flying height and timing of exposures. The program provides a number of options for flight plans, as shown in Figure 16, allowing the user to select the appropriate option according to their specific constraints/requirements. The user guide for WADE_flight2014 can be viewed in Appendix A.
Figure 16: Flight Optimisation for UAV surveys (Wade, 2014)

3.1 Ground/Object Coverage

It is important to know how the internal workings of a camera affect its ability to cover the object that will be surveyed. Simple trigonometry and similar triangles are best used when describing the ground sampling distance (GSD). When an image is exposed and the photograph is taken, the internal sensor of the camera gathers all of the light information that is let in through the lens and spreads the light rays out onto the sensor plane, effectively reversing and condensing whatever was in the frame of the image onto a small sensor made up of millions of pixels. Figure 17 shows where the sensor is located within a digital single-lens reflex (DSLR) camera.

Figure 17: Camera showing Sensor (Digital Photography Review, 1998-2014)
The similar triangles come into consideration when the focal length is known. The focal length is the distance between the optical centre of the lens and the principal point/focal point where the light hits the sensor. It is specified for each lens by the manufacturer. In order to fine tune and find the exact focal length for a particular lens and camera pairing, an internal calibration is completed. The calibrated focal length is the adjusted focal length after the radial lens distortion has been averaged.

Figure 18: Focal length and Principal point (ExposureGuide.com, 2014)

Once the calibrated focal length is found, the distance to the object/ground is related as shown in Figure 20. The benefit of using a GPS controlled UAV system is that the distance to the object can be somewhat defined in the flight plan, and this therefore becomes a 'known' factor in the similar triangle equations. The other important characteristic of the camera equipment is the size of the sensor. This paper is focused on full frame sensors, which are found in the professional and more expensive DSLR units. The approximate starting price for a full frame camera is $2,000 AUD. If a smaller sensor is used, a crop factor needs to be applied to the focal length before any calculations can be done for GSD. Comparisons between sensor sizes can be seen in Figure 19 below.

Full frame sensors in a DSLR camera have a variety of advantages and also some disadvantages for use in photography. One disadvantage is their weight compared to smaller sensor counterparts. An advantage is that because a full frame sensor is larger, the pixel size is increased. This enables more light to be captured by each pixel, allowing a greater amount of light to be captured before the photodiode is oversaturated, and less noise is present from neighbouring pixels. These attributes result in a higher quality image in differing light and contrast situations, which is helpful for the type of surveys that may be undertaken with a UAV. These attributes are also touched on later in chapter 3.5.1.
Figure 19: Comparisons between sensor sizes (GPS photography.com, 2012)

The full frame sensor is 36mm x 24mm and will have a different number of pixels depending on the camera manufacturer and model. The formula for working out how much ground area (GSD) will be covered in each image is shown below:

GSD = \frac{D \cdot S_{sx}}{f} \times \frac{D \cdot S_{sy}}{f}    (7)

Where:
D (m) is the distance to the object/ground from the camera
Ssx (m) is the sensor size in the x direction (i.e. for full frame Ssx is 0.036m)
Ssy (m) is the sensor size in the y direction (i.e. for full frame Ssy is 0.024m)
f (m) is the focal length (calibrated) for the lens
Figure 20: Ground Sampling Distance (Digital Photography Review, 1998-2014)

For greater object coverage at the same distance from the object, a smaller focal length lens can be used. For example, if using a Nikon D800 camera with a 20mm fixed focal length lens at a distance of 20m from the ground, GSD = 864 m². If the lens were changed to a 24mm fixed focal length at the same distance of 20m, the GSD is only 600 m². These differences can play a major role in determining which lens and camera are best for a particular survey, as a shorter focal length will allow for greater object coverage, meaning fewer photographs and less flying time. It is recommended, however, that the focal length stay above 15mm, as there can be greater lens distortions with wider angle (short focal length) lenses. A short sketch of this footprint calculation is given below.
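The footprint calculation of equation (7) is simple enough to script. The sketch below is a minimal illustration (it is not code from WADE_flight2014, and the function name is illustrative); it reproduces the Nikon D800 example above.

```python
def ground_footprint(d_target, focal_length, sensor_x=0.036, sensor_y=0.024):
    """Ground coverage of one image (equation (7)).

    d_target      distance from camera to object/ground (m)
    focal_length  calibrated focal length (m)
    sensor_x/y    sensor dimensions (m); defaults are full frame 36 x 24 mm
    Returns (width, height) of the ground footprint in metres.
    """
    scale = d_target / focal_length     # similar-triangles magnification
    return sensor_x * scale, sensor_y * scale

w, h = ground_footprint(d_target=20.0, focal_length=0.020)
print(w, h, w * h)   # 36.0 m x 24.0 m -> GSD coverage of 864 m^2
```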
3.2 Number of Photographs for UAV Survey

The next factor of a UAV survey is how many images need to be taken for a specified area. The image count will greatly influence the quality of the resulting 3D model when using SfM software packages. As mentioned in chapter 2.2.1, there needs to be an overlap between each consecutive image in a flight line. Due to the nature of the mathematical model used, it is important to maintain a minimum overlap so that the resulting 3D model is complete and does not have 'holes' within it. This paper looks at overlap in forward and side overlapping images, as this is where the research for the thesis has been completed. Late in the research, however, an article was discovered, 'Flight Planning and Orthophotos; Leaning Instead of Overlap' (Raizman, 2012), which describes a method of using the camera and lens field of view angle and building lean within images to determine overlap parameters as a more accurate method for flight planning.

When determining the overlap of images it is important to consider the reasons why a particular overlap is required. Overlap is generally expressed as a percentage. Traditionally, aerial photography used a 60% overlap between images along a strip and a 25% overlap of flight lines, which allowed every point on the ground to be captured around 2.5 times. Film was expensive and image capture was slow. With today's technology and systems, capturing images adds very little cost to a photogrammetric survey, and therefore extra images can be taken to allow for redundant data. This thesis follows a principle of 80% overlap for images within a strip and 60% overlap between flight lines. This will capture each point on the ground/object 5 times and provide greater redundancy than the 60%/25% scheme. Using the 80% overlap, it would take 3 bad or unusable images in a row to create a hole in the resulting model (AdamTechnology, 2012).

Figure 21: Image overlap along a strip (AdamTechnology, 2012)

Due to having redundant data with the 80% overlap, if all of the images from the survey are useable, every second image may be removed to speed up processing time and the model will be unaffected (Michael Gruber, Roland Perko, Matin Ponticelli, 2012). Of course, the greater the overlap used, the higher the quality of the model, at the expense of processing time and flight time. Image overlap will also have an effect on the expected accuracies of a model, which is discussed in chapter 3.3. Once an overlap is determined, the number of images required to cover an object/area becomes just a function of the GSD and the required survey area:

N_{images} = \left\lceil \frac{D_x}{\Delta x} \right\rceil \times \left\lceil \frac{D_y}{\Delta y} \right\rceil    (8)

Where:

\Delta x = (1 - overlap_{forward}) \cdot \frac{D \cdot S_{sx}}{f}    (9)
\Delta y = (1 - overlap_{side}) \cdot \frac{D \cdot S_{sy}}{f}    (10)

This calculation is best described by the example shown below, relating to Figure 22.

Figure 22: Example for Image overlap (Wade, 2014)

In the above image a football field is shown. The field has the parameters of 100m in length and 60m in width, corresponding to Dx and Dy respectively in equation (8). If the Nikon D800 camera with a 20mm lens is used, flying at a height of 20m above the field, equations (9) & (10) become:

Δx = 7.2m, Δy = 9.6m

This means that each image along the strip will require the perspective centres to be 7.2m apart, and the flight lines will have a 9.6m distance between them. The survey will therefore require 13.8 images to cover the length of the field, with 6.25 strips of images to cover the width of the field. As these numbers are not whole, it is important that they are rounded up to the next whole number, otherwise the overlap will be sacrificed. The result of equation (8) is then the product of images per strip and number of strips; in the case of the football field this is 14 × 7 = 98 images. The flying height will change the number of images drastically but will also have an effect on the achievable accuracy, discussed in chapter 3.3. For example, if the flying height were increased by 10m to 30m above the field, equation (8) gives 50 images for total coverage. Conversely, if the lens were changed to a longer fixed focal length, say 24mm, the survey would require 136 images at a height of 20m to achieve the desired overlap. The sketch below reproduces these figures.
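A minimal sketch of the image-count calculation follows, chaining equations (8) to (10). The 80%/60% defaults match the overlaps adopted in this thesis; the function names are illustrative rather than taken from WADE_flight2014.

```python
import math

def flight_spacing(d_target, focal_length, forward_overlap=0.80,
                   side_overlap=0.60, sensor_x=0.036, sensor_y=0.024):
    """Base distance along a strip (dx) and spacing between flight
    lines (dy), equations (9) and (10)."""
    scale = d_target / focal_length
    dx = (1.0 - forward_overlap) * sensor_x * scale
    dy = (1.0 - side_overlap) * sensor_y * scale
    return dx, dy

def images_required(area_x, area_y, d_target, focal_length, **kwargs):
    """Total image count for a rectangular area, equation (8).
    Counts are rounded up so the overlap is never sacrificed."""
    dx, dy = flight_spacing(d_target, focal_length, **kwargs)
    return math.ceil(area_x / dx) * math.ceil(area_y / dy)

# Football field example: Nikon D800, 20 mm lens, 20 m flying height
print(flight_spacing(20.0, 0.020))            # (7.2, 9.6)
print(images_required(100, 60, 20.0, 0.020))  # 98
print(images_required(100, 60, 30.0, 0.020))  # 50 at 30 m height
print(images_required(100, 60, 20.0, 0.024))  # 136 with a 24 mm lens
```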
3.3 Pixel Size and Accuracies

When a photogrammetric survey produces data, it can be in the form of a three dimensional model/point cloud or an orthomosaic photo. An important feature of the produced models or images is the point density and also the ground pixel size (Gp). If measurements are to be taken from the model or features need to be determined, Gp is critical. For the archaeological site mentioned in chapter 1.2, it was important to have very fine pixel quality and size in order to determine specific structural details of the site and give key information to the archaeologists. The error in measurement arises when zooming into an object within the image. If GCPs are used, their placement will be affected by Gp, and therefore so will the accuracy of the model.

Ground pixel size is a function of the distance from the target Dtarget, the focal length of the lens, the megapixel (MP) value of the camera and the sensor size. The relationship between Gp and the distance from the object/ground is linear, and therefore the ground pixel size will increase the further the camera is moved from the subject. Figure 23 shows the linear relationship between Gp and Dtarget using a full frame Canon EOS 6D DSLR camera with a 28mm lens.

Figure 23: Ground pixel size vs Dtarget (Wade, 2014)

The formula for determining the size of the ground pixel is shown in equation (11):

G_p = \frac{D_{target}}{f} \cdot \frac{S_{sx}}{pixels_x}    (11)

Where:
f is the focal length of the lens
Ssx is the sensor size in the x direction (i.e. 36mm for a full frame sensor)
pixelsx is the number of pixels across the sensor in the x direction
The pixelsx value is usually given in the product specifications by the manufacturer. Even though the sensor in the 'y' direction is only 24mm, the pixel count in that direction is proportionally lower, so the ratio works out the same as in the x direction; this is because the pixels are square. Pixels are in the order of micrometres (µm): the Canon EOS 6D has pixels of 6.5µm, whereas the Nikon D800 has pixels of 4.8µm.

For the Canon EOS 6D example shown in Figure 23, if a Dtarget of 20m is selected, Gp will be equal to 4.7mm. The longer focal length in this case lowers the size of the Gp, as it is effectively 'zooming' in closer to the object whilst losing GSD coverage. By changing the lens to a 24mm focal length, the GSD increases, providing greater coverage and requiring fewer images to complete a survey; however, the pixel size increases to 5.5mm for the same Dtarget. This seems a small difference, but at large scales and for a larger Dtarget the difference becomes noticeable if resolution is of high importance in a model. A short sketch of the Gp calculation is given after this paragraph.
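The ground pixel size of equation (11) can be scripted in a few lines. This is an illustrative sketch only; the pixel pitch value below is taken from the text above.

```python
def ground_pixel_size(d_target, focal_length, pixel_pitch):
    """Ground pixel size Gp (equation (11)).

    d_target      distance to the target (m)
    focal_length  calibrated focal length (m)
    pixel_pitch   physical pixel size on the sensor (m),
                  i.e. sensor width divided by pixel count across
    Returns Gp in metres.
    """
    return (d_target / focal_length) * pixel_pitch

# Canon EOS 6D (6.5 um pixels) with a 28 mm lens at 20 m
gp = ground_pixel_size(20.0, 0.028, 6.5e-6)
print(round(gp * 1000, 1))   # 4.6 mm (the text rounds this to ~4.7 mm)
```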
3.3.1 Planimetric Accuracy

As with all surveying practices and photogrammetric surveys, determining the expected accuracies is an integral part of the planning procedure. From a theoretical point of view this is achieved through manipulation of the available specifications related to the camera, lens and object. Before completing a survey it is suggested that time is taken to define the accuracy requirements given by the client, or indeed to advise the client of the costs associated with fulfilling the specified condition of accuracy. The program developed along with this thesis gives a range of expected accuracies shown as a standard deviation in both planimetric (planar x-y or Easting-Northing) and height (depth) terms.

Before calculating the accuracy of a survey, it is helpful to understand the mathematical reasoning. When two or more images are taken in sequence, their rays of sight will intersect at varying angles, creating an error ellipse for each intersected point on the ground. Figure 24 shows a typical error ellipse created for a point.

Figure 24: Error ellipse (AdamTechnology, 2012)

To determine the planimetric accuracy, the extent of the error ellipse in the plane cutting the ellipse at right angles to the view direction is found, and is given by equation (12). The accuracy of the pixel in the image sensor is determined by the quality of the image and cannot easily be specified, as it involves variables like noise, blur and the accuracy of the camera calibration. A safe method for determining planimetric accuracy is to take the pixel error as 0.5 pixels; this is a conservative estimate, and with some basic knowledge of photography it can readily be improved to below 0.3 pixels. Further detail about pixel error is discussed in chapter 3.5. As with any expected error, when dealing with a client it is important to be conservative in the estimate so as to leave room for error. With the pixel error treated as a constant 0.5 pixels, the equation takes the form:

\sigma_{plan} = 0.5 \cdot G_p = 0.5 \cdot \frac{D_{target}}{f} \cdot \frac{S_{sx}}{pixels_x}    (12)

As with Gp, there is a linear relationship between planimetric accuracy and Dtarget. Figure 25 shows how pixel size, planimetric accuracy and height accuracy are related to each other and to Dtarget. As Gp is affected by focal length and distance to the object, so too is the planimetric error, in ratio to the pixel accuracy.

Figure 25: Gp, Planimetric accuracy and Height accuracy vs Dtarget (Wade, 2014)
3.3.2 Height Accuracy

Most photogrammetry applications are more concerned with height accuracy or error, as this is the most difficult and sensitive parameter to refine. As images are taken at a distance from an object and in a two dimensional form, determining accurate height information from a bundle adjustment always seems to be the most difficult portion of the survey. The height accuracy corresponds to the semi-major axis of the error ellipse in Figure 24 and is affected by the following factors: Dtarget (m), Δx (m) and the planimetric accuracy. Δx, as defined in equation (9), is itself a function of the desired overlap, sensor size and focal length; it is the distance between the perspective centres of the camera stations along a strip of images. By manipulating the desired overlap and increasing Δx (sometimes referred to as the 'base' distance), the error ellipse becomes more circular and, as a result, the height accuracy improves. Equation (13) gives the expected height accuracy:

\sigma_{height} = \sigma_{plan} \cdot \frac{D_{target}}{\Delta x}    (13)

Something unexpected was discovered when changing the focal length in the hope that the base distance Δx would alter enough to improve the height accuracy. Experimenting with the program, a Sony SLT-A99 camera was selected with a 35mm focal length lens (found as a compatible lens on Sony's website). The height accuracy at a distance Dtarget of 50m was 21mm, with a planimetric accuracy of 4.3mm. Changing to a shorter 20mm lens increased the base distance Δx from 10.3m to 18m and increased the planimetric error, as expected, to 7.5mm. However, it was interesting to note that due to the relationship between focal length, pixel size and base distance, the height accuracy changed by only 0.1mm, which is negligible. What this enables is that, as long as the planimetric accuracy stays within the requirements, the lens can be changed to a shorter focal length, allowing far fewer images to be captured to cover the same area while still maintaining the same height accuracy. Figure 26 shows the changes in planimetric accuracy, height accuracy and pixel size. Figure 27 shows the difference in the number of images required for the two different lenses on a sample area of 300m by 100m, flying at 50m above the ground. The sketch below reproduces this comparison.
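The sketch below chains equations (11) to (13) and reproduces the Sony SLT-A99 comparison. It is a minimal illustration under the 0.5 pixel error assumption, with a 6 µm pixel pitch assumed for the SLT-A99 (36mm sensor, roughly 6000 pixels across).

```python
def expected_accuracy(d_target, focal_length, pixel_pitch,
                      forward_overlap=0.80, sensor_x=0.036,
                      pixel_error=0.5):
    """Planimetric and height accuracy (equations (12) and (13))."""
    gp = (d_target / focal_length) * pixel_pitch          # equation (11)
    sigma_plan = pixel_error * gp                         # equation (12)
    base = (1.0 - forward_overlap) * sensor_x * d_target / focal_length
    sigma_height = sigma_plan * d_target / base           # equation (13)
    return sigma_plan, sigma_height

for f in (0.035, 0.020):                                  # 35 mm vs 20 mm lens
    plan, height = expected_accuracy(50.0, f, 6.0e-6)
    print(f"{f*1000:.0f} mm lens: plan {plan*1000:.1f} mm, "
          f"height {height*1000:.1f} mm")
# 35 mm lens: plan 4.3 mm, height 20.8 mm
# 20 mm lens: plan 7.5 mm, height 20.8 mm
```

Substituting equation (9) into (13) shows why the height figure barely moves: the focal length cancels out, leaving the height accuracy dependent only on the pixel pitch, overlap, sensor size and Dtarget, which is exactly the effect noted above.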
Figure 26: Comparison of lenses Sony SLT-A99 (Wade, 2014)

Figure 27: Number of images for different lenses (Wade, 2014)

The benefit of fewer images can be a major factor in determining which lens to use for a particular survey, if the choice is available. The other factors, such as model quality, would also have to be weighed up, due to the lower planimetric accuracy and larger pixel size. The only ways to improve height accuracy are either to decrease the flying height of the UAV (meaning more images need to be taken) or to reduce the overlap of images, which risks losing redundant data and producing a poor model.
Because of this, where the shorter focal length lens reduces the number of images required, there is the option to reduce the flying height such that the number of images required is the same as, or similar to, that of the 35mm lens. At 50m above the ground the 35mm lens required 300 images for the sample area. By reducing the flying height to 30m with the 20mm lens, only 245 images are required, the height accuracy improves to 12.5mm, and the planimetric accuracy is 4.5mm, the same as for the 35mm lens at a 50m Dtarget. Figure 28 and Figure 29 show the comparison between the 35mm focal length and the 20mm focal length on the Sony SLT-A99 camera.

Figure 28: Sony SLT-A99 35mm focal length (WADE_flight2014)

Figure 29: Sony SLT-A99 20mm focal length (WADE_flight2014)

The program user is able to view the options and select the flight plan that is of greatest benefit for their particular survey and accuracy requirements. The lower-height trade-off can also be checked numerically, as shown below.
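Reusing the expected_accuracy sketch from chapter 3.3.2 (with the same assumed 6 µm pixel pitch), the figures quoted above for the 20mm lens at 30m can be reproduced directly:

```python
plan, height = expected_accuracy(30.0, 0.020, 6.0e-6)
print(f"plan {plan*1000:.1f} mm, height {height*1000:.1f} mm")
# plan 4.5 mm, height 12.5 mm, matching the values quoted above
```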
3.4 Estimating Total UAV Survey Time

When planning a photogrammetric survey using a UAV drone, it is important to consider the time it will take to complete it accurately and thoroughly, so as to avoid returning to the field to salvage missed data. Time constraints can be very critical to how much area each flight can survey given a set requirement for accuracy, as mentioned in the previous chapter. In the testing completed using the Vulcan Hexacopter drone at the University of Newcastle campus, it was found that the flight time of 20-25 minutes specified by the manufacturer was the capability of the drone carrying zero payload. This is drastically diminished when the battery pack and camera/lens system are attached. It was estimated that the drone could sustain full flight for around 10-11 minutes with a relatively moderate weight DSLR camera (Canon 100D, approx. 650g with lens). The testing of this drone was limited, and therefore actual flight times for a survey were not recorded during the writing of this thesis. It was, however, determined that the flight time of a survey is a major restriction when planning and needs to be closely investigated.

The factors affecting the flight time of a survey are the number of images, the timing of images and, as a result, the flight speed. Once the number of images is determined for varying Dtarget values, how fast or slow the UAV flies depends on how often each image will be taken. Most late model DSLR cameras have the ability to take images at set time intervals (epochs), either through the hardware or external software such as an integrated flight system. The relationship between the timing of the image epochs and the flight speed is simply a function of the base distance Δx as defined in equation (9):

v_{max} = \frac{\Delta x}{t_{epoch}}    (14)

The program WADE_flight2014 allows the user to specify the image epochs depending on their particular hardware specifications. It will then produce the maximum flight speed in m/s that is required to sustain the overlaps and accuracies determined in the previous chapters. As long as the UAV flies below the calculated speed, the overlap between image pairs and flight lines will be at least the required amount (80%). Most flight systems on the UAV hexacopters investigated have the ability to define a flight speed between waypoints. See Figure 30 for a screenshot of the DJI Ground Station 4.0 software that came as a package with the A2 flight controller for the purchased Vulcan Hexacopter.
No testing has been completed yet as to how accurate this speed is, or indeed how wind affects the UAV speed.

Figure 30: Dji Ground Station 4.0 Waypoint editing

The option to complete an adaptive bank turn to the next waypoint has been selected, as this waypoint was at the end of a strip and the UAV would turn to begin the next strip (this saves time in the air as opposed to the other option of 'stop and turn'). Once the maximum speed has been determined for a set of waypoints, the total flight time is calculated by equation (15) below. Note: because the maximum flight speed is used, the total flight time is the minimum time to survey the area with the specified requirements of accuracy etc.

T_{min} = \frac{\left\lceil D_y / \Delta y \right\rceil \cdot D_x}{v_{max}}    (15)

Once simplified, this becomes equation (16):

T_{min} = \frac{D_x \cdot D_y \cdot t_{epoch}}{\Delta x \cdot \Delta y}    (16)

Where:
Dx & Dy are the length and width respectively of the area to be surveyed.
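A sketch of the flight time estimate follows, chaining equations (9), (10), (14) and (15). It is illustrative only, not WADE_flight2014 code, and it ignores turn time between strips, so it is a lower bound in the same sense as equation (16); the two-second exposure epoch in the example is an assumed value.

```python
import math

def survey_time(area_x, area_y, d_target, focal_length, t_epoch,
                forward_overlap=0.80, side_overlap=0.60,
                sensor_x=0.036, sensor_y=0.024):
    """Minimum flight time (s) and maximum speed (m/s) for a survey."""
    scale = d_target / focal_length
    dx = (1.0 - forward_overlap) * sensor_x * scale     # equation (9)
    dy = (1.0 - side_overlap) * sensor_y * scale        # equation (10)
    v_max = dx / t_epoch                                # equation (14)
    strips = math.ceil(area_y / dy)
    t_min = strips * area_x / v_max                     # equation (15)
    return t_min, v_max

# Football field, 24 mm lens at 20 m, one exposure every 2 seconds
t, v = survey_time(100, 60, 20.0, 0.024, t_epoch=2.0)
print(f"v_max = {v:.1f} m/s, minimum flight time = {t/60:.1f} min")
```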
Figure 32 shows how the results are displayed in WADE_flight2014 for the inputs in Figure 31, on the area of the football field shown in Figure 22.

Figure 31: Input parameters for WADE_flight2014

Figure 32: WADE_flight2014 outputs for survey of football field using Canon EOS 6D

As can be seen in Figure 32, there is a substantial difference in total flight time between close Dtargets and further Dtargets. If a sacrifice of 8mm in pixel size and 16mm in height accuracy is made, the survey could take around 27 minutes less time to complete. To compare cameras of similar specifications, Figure 33 was created using Microsoft Excel. It shows that for cameras with different specifications, if the same lens focal length is used, the survey takes the same time to complete. The trade-off comes in the form of accuracy and pixel size/quality.
Figure 33: Camera Comparisons (Wade, 2014). Flight time is shown in minutes; values are shown for the football field area (100m x 60m) with Dtarget = 20m.

If the flight time needs to be decreased, it is recommended to use a shorter focal length lens, as can be seen in Figure 33 with the Sony camera using a 20mm focal length. Using a shorter focal length lens on the same camera will not affect the depth accuracy of the model; however, it will slightly increase the pixel size, resulting in a slightly worse resolution while decreasing the total flight time. The user needs to determine which of the requirements is the most demanding for the survey/client.

3.5 Camera Settings

This chapter explores the internal mechanisms of the camera system, how certain camera settings can be understood and optimised in order to achieve the best results, and how to improve the pixel accuracy used to determine accuracies within the model, as mentioned in chapter 3.3. Most people with a slight understanding of technology are able to point and shoot a camera and obtain an image of an area or object. It takes an understanding of how and why camera settings are altered to achieve the best possible image quality for a given set of circumstances. As photogrammetry is concerned with accuracy and quality data, it is reasonable to assume that a survey needs the most accurate and functional images to create the resulting model or output.
The following three main functionalities of exposure need to be optimised to create quality images:

• Shutter speed
• Aperture
• ISO sensitivity

Figure 34: Camera Settings (Digital Camera World, 2012)

These three settings work in unison; usually, altering one requires the others to be adjusted as well.

3.5.1 Camera Sensors

Digital cameras record exposures via their image sensor, which can be one of two types: Charge Coupled Device (CCD) sensors and Complementary Metal Oxide Semiconductor (CMOS) sensors.
Figure 35: CCD (left) and CMOS (right) image sensors

Both sensors work by converting light into an electric charge and processing it into electric signals. Both were developed around the same time, in the late 1960s to 1970s, with CMOS being slightly newer (Herd, 2013). The production methods of CMOS sensors are much cheaper and simpler than those of CCD sensors, which has assisted in decreasing the pricing of cameras (Litwiller, 2001).

3.5.1.1 CCD

In a CCD sensor every pixel's charge is transferred through a very limited number of output nodes to be converted to voltage, buffered and sent out of the pixel as an analogue signal. This means that the entire pixel can be devoted to capturing light, and the output is highly uniform, resulting in high image quality and less noise within the images. Because of the processes involved, these sensors use much more energy than their CMOS counterparts. The technology in CCD sensors was designed specifically for cameras, as opposed to CMOS sensor technology, which is also used in other microchips (Litwiller, 2001). CCD sensors are generally a lot larger and more suited to high-end imaging applications.

3.5.1.2 CMOS

The CMOS sensor was designed as a less power-hungry alternative to the CCD, with a variation in the way pixels handle information. In a CMOS sensor each pixel has its own charge-to-voltage conversion. The make-up of CMOS sensors has an effect on the light-capturing ability of each pixel, as much of the surrounding area is in use for processing (Litwiller, 2001). The non-uniformity of pixel processing can lead to a noisier image, which is not desired for photogrammetry. The low power consumption
and the fast processing speed, however, do make this sensor useful for photogrammetric applications from a UAV. The majority of high-end professional cameras now use CMOS sensors, with very little difference to be found between the image quality of the two types (Herd, 2013). Other functionalities and precautions are now used to reduce noise within images (some of these are discussed in the following chapters).

3.5.2 Shutter Speed

The first setting that is influenced the most by the flight planning portion of a survey is the shutter speed. The camera shutter in a DSLR is located directly in front of the image sensor, as shown in Figure 36.

Figure 36: Canon 5D internal mechanisms, with the image sensor labelled (2000-2013 Little Guy Media, 2002)

The shutter speed determines how long the image sensor is exposed to the light entering through the lens. In modern equipment this can range anywhere from several seconds to 1/16000 of a second. A correctly exposed image produces the best result and balance of natural light for the environment. This matters when undertaking an aerial survey with a UAV because there is movement involved. Fast shutter speeds are best to freeze a moving object so that it appears still and without blur. If the UAV/camera is considered fixed in its space, then the object/ground becomes the moving target. How fast
the target is moving determines how fast the shutter needs to close to avoid a blurry image. As a result of an effectively developed flight plan, the maximum speed of the UAV is known. To capture a sharp image, it is recommended that in the time it takes the image to be exposed, the object does not move more than 0.5-1 pixel (Gp). In WADE_flight2014 each minimum shutter speed is calculated from the Gp and the maximum flight speed using the following formula:

v_max × t_exposure ≤ 0.5 × Gp   (17)

If the Gp and flight speed are known for a given Dtarget, focal length and camera, equation (17) can be rearranged to:

t_exposure = Gp / (2 × v_max)   (18)

This gives the shutter speed such that the UAV has moved only 0.5 pixel by the time the image has been exposed. If the UAV is allowed to move 1 pixel during the exposure, the required shutter speed halves (the exposure time doubles).

During the development of WADE_flight2014 it was discovered that, due to the relationship between the ground pixel size and the flight speed at various flying heights, the shutter speed remains constant as Dtarget increases: both Gp and the maximum flight speed scale linearly with Dtarget, so their ratio does not change. Figure 37 shows the shutter speeds (expressed as the denominator of the shutter fraction, in seconds⁻¹) in the last column. These speeds are calculated for flying heights between 10 m and 45 m; however, the speed remains the same for any flying height (Dtarget). This was unforeseen in the development of the program, but it simplifies the planning process, as the camera need only be set to a particular shutter speed once, regardless of the height.

Figure 37: WADE_flight2014 output showing all parameters
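A minimal sketch of equations (17) and (18): the exposure must be short enough that the UAV moves at most half a ground pixel while the shutter is open. The names and the example numbers are illustrative only.

```python
def max_exposure_s(ground_pixel_m: float, v_max_ms: float,
                   allowed_motion_px: float = 0.5) -> float:
    """Longest exposure (s) that keeps motion blur below the allowed
    fraction of a ground pixel at the given maximum flight speed."""
    return allowed_motion_px * ground_pixel_m / v_max_ms

# E.g. a 5 mm ground pixel at 2 m/s permits a 1/800 s exposure.
# Because Gp and v_max both scale linearly with Dtarget, this result
# is the same at any flying height, as observed in Figure 37.
t = max_exposure_s(0.005, 2.0)
print(f"1/{round(1 / t)} s")  # -> 1/800 s
```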
3.5.3 Aperture

Aperture is an important camera function to understand in photogrammetry, as it relates to the depth of field within an exposed image. As the aim is to have as much of the image in focus as possible, it is essential to understand how to manipulate the camera's aperture settings to maximise this depth. Aperture, or a camera's 'f-stop' number, refers to the size of the opening through which light passes before it reaches the shutter and sensor.

Figure 38: Camera Aperture (ExposureGuide.com, 2014)

As shown in Figure 38, the opening of the lens varies from small to large depending on the aperture setting. When light is forced to enter through a small opening, the depth of field is greater than when the light enters through a larger one. Aperture typically varies from f/1.4 to f/32, with intermediate settings (f/2, f/2.8, f/4, f/5.6, f/8, f/11, f/16, f/22). Consider Figure 39 below: on the left side of the image the aperture setting is approximately f/22, and on the right side it is f/2.8.

Figure 39: Depth of field (ExposureGuide.com, 2014)

The left side of the image is the best representation of the depth of field required for accurate 3D modelling with photogrammetry, as the entire scene appears sharp and in focus; a larger f-number such as f/22 is therefore required to achieve this. Karara (1998) states, "It is advisable to use a reasonably small aperture to maintain depth of field and reduce coma, spherical aberration and,
to a lesser extent astigmatism," meaning that a smaller aperture will reduce the aberrations that cause the light rays passing through the lens to disperse before reaching the image sensor.

Selecting the best aperture for UAV photogrammetry can, however, be more complex than simply setting the f-stop to f/32 or f/16. Because the setting restricts light, the image still needs to be correctly exposed to give a quality image and pixel value. A direct relationship exists between the aperture setting and the camera's shutter speed: when light enters the lens through the aperture opening, the shutter needs to stay open long enough to allow sufficient light through to the image sensor. When a smaller opening is set (e.g. f/22), less light is allowed through the lens, so the shutter must stay open longer for the sensor to be exposed. When the aperture is set to a large opening (e.g. f/2.8), the shutter must close quickly, as the excess light entering the lens can overexpose the image, resulting in a white scene. It is important to understand these relationships because, due to the moving nature of UAV photogrammetry and the minimum shutter speeds it imposes, certain aperture settings may not be possible, which could affect the depth of field of the images. A worked example of this balance follows Figure 40.

3.5.4 ISO Sensitivity

The ISO setting can be manipulated to create a balance between aperture and shutter speed, maximising the depth of field while still maintaining the minimum required shutter speed during a UAV photogrammetric survey. ISO is the measure of the light sensitivity of the image sensor. In a DSLR camera the ISO can typically be set to several values, with 100, 200, 400, 800 and 1600 considered the 'normal' range; according to Nikon this range can go as low as 50 and as high as 204800 (Nikon USA, 2014).

Figure 40: Changing ISO sensitivity (macrominded.com, 2014)
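To make the aperture-shutter-ISO balance concrete, the hedged sketch below uses the standard exposure-value relation EV = log2(N²/t) − log2(ISO/100) to check what ISO would be needed when stopping down while holding a minimum shutter speed; the scenario and numbers are assumptions for illustration, not measured values.

```python
import math

def exposure_value(f_number: float, shutter_s: float, iso: float) -> float:
    """ISO-adjusted exposure value for aperture N, exposure time t and ISO."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

# Metered scene: f/2.8 at 1/800 s, ISO 100
ev_scene = exposure_value(2.8, 1 / 800, 100)

# Holding 1/800 s but stopping down to f/22 loses ~6 stops of light,
# which ISO alone must recover to keep the same exposure value
iso_needed = 100 * 2 ** (exposure_value(22, 1 / 800, 100) - ev_scene)
print(round(iso_needed))  # -> ~6170, i.e. ISO 6400 in practice
```

This illustrates the constraint described above: at UAV shutter speeds, the smallest apertures may only be usable at ISO values high enough to introduce the noise discussed next.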
Lower ISO numbers coincide with less sensitivity to light for the image sensor. Again, because of the light required for a balanced exposure, this may interfere with the shutter speed: low sensitivity requires a slower shutter speed to allow sufficient light onto the sensor. The simple answer seems to be to increase the sensitivity, use a small aperture opening and a fast shutter speed; however, increasing the ISO comes with a further complication. Grainy or 'noisy' images are often the product of a high ISO setting: the more sensitive the sensor is to light, the grainier the resulting image and the lower the pixel quality/accuracy.

Understanding the three principles of exposure allows the photogrammetrist to manipulate the camera settings in order to produce the best exposures for the lighting available. New technologies and cameras are finding a balance between the principles of exposure and automating the process substantially, making it easier to obtain accurate and balanced images for use in many applications, including photogrammetry. It is important that each of the settings allows the scene to be captured with enough detail for the 3D models and orthomosaic photos to be useable and detailed. The UAV flight planning program WADE_flight2014 assists in determining the shutter speed; however, the aperture and ISO settings cannot be calculated using mathematical procedures, as the amount of light in the environment will vary from survey to survey, and they must therefore be adjusted to suit.

4 Conclusions and Recommendations

The purpose of this thesis was to study the parameters involved in photogrammetry and discover the uses for UAV technology in combination with photogrammetry: to understand the geometric principles that form the algorithms and provide the basis for 3D scene reconstruction and modelling; to determine the accuracies required of measured auxiliary data in order to have accurate georeferenced scene models without the need for ground control points; and to determine the camera and image related aspects that have an effect on the overall model accuracy and seek to optimise these through use of a simple flight planning program for photogrammetric surveys.

The use of UAV drone technology is apparent and gaining momentum year by year. The range of applications is broad, and the achievable accuracies under the right circumstances are able to rival even laser scanning. The benefits of a photogrammetric model exceed those of a laser scanned point cloud because, for many purposes, the image data is itself a useful tool for investigation. The data is quickly and easily transformed from images to useable models using one of many SfM software packages available. The greatest benefit appears to be in areas where traditional survey techniques are not available, where sites cannot be accessed by humans, or where sites are too large to provide an economically beneficial alternative. These applications
include mining, archaeological sites, environmental landforms such as glaciers and volcanoes, sacred or sensitive sites, and hazardous areas. UAV photogrammetric surveys have been proven cost effective both in equipment and in time.

The parameters investigated that provide the platform for photogrammetric modelling algorithms and measurements have been known for centuries. It appears that as technology advanced, so too did the understanding and application of the mathematical formulae that enhanced photogrammetry and its abilities. From the earliest image capturing techniques to the present modes of data acquisition, the fundamental principles of photogrammetry still play a vital role in how a scene is reconstructed and in the measuring techniques used. The epipolar lines and light rays first envisioned by Leonardo da Vinci are still involved in the most advanced formulae. By understanding how and why the photogrammetric parameters discussed in this thesis are involved, one is able to undertake a photogrammetric survey that ensures the most accurate results possible.

The accuracy required of measured auxiliary camera data, in the form of the camera attitude and global position, to produce survey quality models from a UAV without ground control points is yet to be determined. Much research has been completed in the area; however, it still appears to be hampered by a lack of GNSS reliability, and further research will be required in order to integrate a system that is both accurate and light enough for UAV use. It was hoped that testing would be completed to determine the absolute accuracy of the GPS positioning using the A2 flight controller system shown in Figure 41.

Figure 41: A2 Flight Controller components

Knowing these accuracies would assist in determining how accurate the measured auxiliary data can be; however, due to a number of circumstances this testing could not be completed during the writing of this thesis. Similar testing was completed in 2011 in Zurich, Switzerland (Blaha, et al., 2011), where the flight system
Falcon 8 was investigated, with inconclusive data showing variations of up to 1.5 m in position. The next major step forward in photogrammetry will be in the form of this integrated system and the ability for unmanned aircraft to collect data over any area and have it produced as an accurate georeferenced model simply through knowing the location and orientation parameters of the camera.

Until the previously mentioned technological advancements occur, the effect of the image qualities must be limited in order to optimise photogrammetric surveys. To ensure the data being collected in the form of images is as accurate as possible, the program WADE_flight2014 was developed and successfully tested across a range of cameras and survey areas. The image considerations within the camera, such as internal camera calibration, shutter speed, aperture, ISO sensitivity, focal length and sensor details, were all investigated. Formulas were derived for use in WADE_flight2014 and provide a way of planning a photogrammetric survey that optimises the time spent acquiring data and determines the achievable accuracies. The use of such a program, when integrated with UAV flight systems, would be economically beneficial for anyone hoping to undertake a survey. A text output file is currently being developed for the program to provide coordinates for the required camera locations during flight, to streamline assimilation with pre-existing flight planning software. The program can also be used for flights other than those with the camera pointing perpendicular to the ground, as the principles do not change; it could be applied to a longwall mine to measure geological formations, or to plan a survey to model the exterior of a large building.

The principles presented in this thesis should be understood by any photogrammetrist in order to produce accurate and reliable results systematically. Even though it is quickly becoming very simple for anyone to produce some form of three-dimensional reconstruction, it still takes a knowledgeable person to perform a photogrammetric survey accurately and repeatably.

4.1 Further Research

Listed below are topics of research that would improve the understanding of photogrammetry and UAV applications, as well as GPS and INS systems.

• The benefits of and differences between terrestrial (random coverage) style photogrammetry and planned aerial imaging with calculated overlapping data.
• A comparison of the accuracies and applications of full frame sensor cameras and cropped frame sensors.
• A comparison of fixed focal length lens systems versus variable focal length lenses for use in UAV photogrammetry.
• The accuracies and integration of GNSS/INS systems on lightweight UAVs for the purpose of photogrammetric application.
• The accuracy of inbuilt internal camera orientation sensors and GPS.
5 Works Cited

2000-2013 Little Guy Media, 2002. The Canon EOS. [Online] Available at: http://www.robgalbraith.com/content_pagefcb5.html?cid=7-4806-4822 [Accessed 5 December 2014].

Ackerman, F., 1984. Utilization of Navigation Data for Aerial Triangulation. International Archives of Photogrammetry and Remote Sensing, 25(A3a).

AdamTechnology, 2012. ADAM Technology Team Blog. [Online] Available at: http://www.adamtech.com.au/ [Accessed July 2014].

American Society for Photogrammetry and Remote Sensing, 2006. Manual of Photogrammetry. Fifth ed. s.l.:s.n.

Balletti, C., Guerra, F., Tsioukas, V. & Vernier, P., 2014. Calibration of Action Cameras for Photogrammetric Purposes. Sensors, 18 September, pp. 17471-17490.

Blaha, M., Eisenbeiss, H., Grimm, D. & Limpach, P., 2011. Direct Georeferencing of UAVs. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXVIII(1), pp. 1-6.

Center for Photogrammetric Training, 2008. History of Photogrammetry. [Online] Available at: https://spatial.curtin.edu.au/local/docs/HistoryOfPhotogrammetry.pdf [Accessed 1 July 2014].

Chiang, K.-W., Tsai, M.-L. & Chu, C.-H., 2012. The Development of an UAV Borne Direct Georeferenced Photogrammetric Platform for Ground Control Free Applications. Sensors, 4 July, pp. 9161-9180.

Clayton, A. I., 2012. Remote Sensing of Subglacial Bedforms from the British Ice Sheet using an Unmanned Aerial System (UAS): Problems and Potential, Durham University: Durham Theses.

Diefenbach, A. K., Dzurisin, D., Crider, J. G. & Schilling, S. P., 2006. Photogrammetric Analysis of the Current Dome-Building Eruption of Mount St. Helens Volcano. American Geophysical Union, Fall Meeting 2006, abstract #G53A-0870, December, p. A870.

Digital Camera World, 2012. 5 must-have menu tweaks for Canon users. [Online] Available at: http://www.digitalcameraworld.com/2012/07/10/5-must-have-menu-tweaks-for-canon-users/ [Accessed 12 October 2014].

Digital Photography Review, 1998-2014. DP Review. [Online] Available at: www.dpreview.com [Accessed May-December 2014].

Digital Photography School, 2006-2014. Is Full Frame Still the Best?. [Online] Available at: http://digital-photography-school.com/full-frame-still-best/ [Accessed May-December 2014].

Doyle, F., 1964. The Historical Development of Analytical Photogrammetry. 2 ed. s.l.:s.n.

Eisenbeiss, H., 2009. UAV Photogrammetry, Dresden: ETH Zurich.

Nocerino, E., Menna, F., Remondino, F. & Saleri, R., 2013. Accuracy and Block Deformation Analysis in Automatic UAV and Terrestrial Photogrammetry. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2-6 September, II-5/W1(XXIV), pp. 203-208.

ExposureGuide.com, 2014. Focusing Basics. [Online] Available at: http://www.exposureguide.com/focusing-basics.htm [Accessed June-December 2014].

ExposureGuide.com, 2014. Lens Basics; Understanding Camera Lenses. [Online] Available at: www.exposureguide.com/lens-basics.htm [Accessed 20 December 2014].

Forlani, G., Pinto, L., Roncella, R. & Pagliari, D., 2013. Terrestrial photogrammetry without ground control points. Earth Science Informatics, 127(1), pp. 1-11.

Ganjanakhundee, S., 2013. German map expert who surveyed Preah Vihear in 1961 returns. [Online] Available at: http://www.nationmultimedia.com/national/German-map-expert-who-surveyed-Preah-Vihear-in-196-30204411.html [Accessed 9 October 2014].

Geodetic Systems Inc., 2014. What is Photogrammetry - The Basics of Photogrammetry. [Online] Available at: http://www.geodetic.com/v-stars/what-is-photogrammetry.aspx [Accessed May-December 2014].

GPSphotography.com, 2012. Digital SLR Trends in Military Photography. [Online] Available at: http://www.gpsphotography.com/digital-slr-trends-in-military-photography [Accessed 8 December 2014].

Hartley, R. & Zisserman, A., 2004. Multiple View Geometry in Computer Vision. 2nd ed. Cambridge: Cambridge University Press.

Herd, J., 2013. 3D Reconstruction from Video Data and Laser Scanning Technology, Newcastle: Herd, James.

Hutton, J. et al., 2014. DMS-UAV Accuracy Assessment: AP20 with Nikon D800E, s.l.: Applanix Corporation.

Skaloud, J., Cramer, M. & Schwarz, K. P., 1996. Exterior Orientation by Direct Measurement of Camera Position and Attitude. International Archives of Photogrammetry and Remote Sensing, XXXI(B3), pp. 125-130.

Karara, H., 1998. Appendix - Camera Calibration. In: H. Karara, ed. Non-Topographic Photogrammetry. s.l.: American Society for Photogrammetry and Remote Sensing, pp. 62-80.

Kniest, E., 2013. Stereophotogrammetry. s.l.:s.n.

Litwiller, D., 2001. CCD vs. CMOS: Facts and Fiction. Photonics Spectra, January, pp. 1-4.

Westoby, M. J., Brasington, J., Glasser, N. F., Hambrey, M. J. & Reynolds, J. M., 2012. 'Structure-from-Motion' photogrammetry: A low-cost, effective tool for geoscience applications. Geomorphology, September, 179(1), pp. 300-314.

macrominded.com, 2014. Understanding ISO Settings and Sensitivity. [Online] Available at: http://www.macrominded.com/iso-settings.html [Accessed 10 December 2014].

Gruber, M., Perko, R. & Ponticelli, M., 2012. The All Digital Photogrammetric Workflow: Redundancy and Robustness. [Online] Available at: http://www.isprs.org/proceedings/XXXV/congress/comm1/papers/43.pdf [Accessed June-December 2014].

NERC Science of The Environment, 2010. Instruments: Wild RC-10. [Online] Available at: http://arsf.nerc.ac.uk/instruments/rc-10.asp?cookieConsent=A [Accessed 15 October 2014].

Nikon Corporation, 2014. DSLR Camera Basics - Shutter Speed. [Online] Available at: http://imaging.nikon.com/lineup/dslr/basics/04/03.htm [Accessed June-December 2014].

Nikon USA, 2014. Understanding ISO Sensitivity. [Online] Available at: http://www.nikonusa.com/en/Learn-And-Explore/Article/g9mqnyb1/understanding-iso-sensitivity.html [Accessed June-December 2014].

Osborne, M., 2013. A Comparison Between Multi-View 3D Reconstruction and Laser Scanning Technology, Newcastle: Osborne, Matthew.

Philipson & Philpot, 2012. Remote Sensing Fundamentals. [Online] Available at: http://ceeserver.cee.cornell.edu/wdp2/cee6100/6100_monograph/mono_07_F12_photogrammetry.pdf [Accessed 2014].

Raizman, Y., 2012. Flight Planning and Orthophotos; Leaning Instead of Overlap. GIM International, June, pp. 35-38.

Stojakovic, V., 2008. Terrestrial Photogrammetry and Application to Modelling Architectural Objects. Architecture and Civil Engineering, 6(1), pp. 113-125.

University of Newcastle, 2013. Stereo Photogrammetry. In: Photogrammetry. Newcastle: Faculty of Engineering and Built Environment.

UVS International, 2014. UVSI SESAR Proposal. [Online] Available at: https://www.uavs.org/sesarju [Accessed 2014].

Zisserman, A. & Hartley, R., 1999. Multiple View Geometry. [Online] Available at: http://users.cecs.anu.edu.au/~hartley/Papers/CVPR99-tutorial/tutorial.pdf [Accessed August-December 2014].
6 Appendix A

6.1 USER GUIDE: WADE_flight2014

To begin planning a flight for a photogrammetric survey, ensure that the user has the following information:

• Camera type
• Camera sensor size
• Calibrated focal length of lens
• Camera effective pixels
• Camera resolution (maximum resolution to be used, e.g. 7360 x 4912 for the Nikon D800)

6.1.1 To Begin

Open the Excel program WADE_flight2014 and check whether the selected camera is already in the drop-down menu. If the camera does not appear, input the required details into the cells shown. If the information is not known, use the link shown in red text to research the specifications.
Once all information is complete, click the button to load the camera into the program. The camera will then be available to select in the drop-down list. Select the camera and then complete the next step. The user will be required to specify a lens, with a calibrated focal length, that will be used with the new camera. This is input into the cell shown below, followed by clicking the button.

6.1.2 Defining the Survey Area

In order for the program to calculate a flight plan, the survey area must be defined. This is done through the following steps. If the Dx and Dy of the area are known, they may be input directly into the cells shown below. Dx and Dy are shown in the figure of the map area below and refer to the largest distances in the x and y directions (longitude and latitude) between points within the survey area. If Dx and Dy are not known, select the Coordinate Information tab at the bottom of the worksheet.
Using a program such as Google™ Earth, locate the area to be surveyed and select the corner boundaries in the north-west, north-east, south-east and south-west corners so that the survey area is encompassed. Input these details into the appropriate fields.

6.1.2.1 If using Google™ Earth

The coordinates will be in the form of decimal degrees and may be input directly into the appropriate cells.

6.1.2.2 If coordinates are in degrees, minutes and seconds

Input the degrees, minutes and seconds into their respective fields as shown below. The program will then convert these into decimal degrees and input them into the cells above (a sketch of this conversion is given below). Additionally, the user may specify a home point and input its coordinates into the cell shown. Once all of the coordinate information has been input, the program will show the area to be surveyed graphically, in relation to lines of longitude and latitude, and outline the area with a rectangular box, as shown below.
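A minimal sketch of the degrees-minutes-seconds conversion described above; the function name and example coordinate are illustrative, not the worksheet's own.

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float,
                   negative: bool = False) -> float:
    """Convert degrees/minutes/seconds to decimal degrees. Use
    negative=True for southern latitudes and western longitudes."""
    dd = abs(degrees) + minutes / 60 + seconds / 3600
    return -dd if negative else dd

# 32 deg 53' 34.8" S -> -32.893 (an illustrative latitude near Newcastle)
print(round(dms_to_decimal(32, 53, 34.8, negative=True), 4))  # -> -32.893
```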
The program calculates Dx and Dy and displays them in the cells below (an illustrative sketch of this computation is given below). These values can either be input manually into the flight planning parameters, or updated automatically by pressing the button.

6.1.3 Selecting an Appropriate Flight Plan

Once Dx and Dy have been input, the user is required to select the image epoch in seconds (the time between each exposure) from a drop-down list. *This may be determined by the camera options. After this has been selected, the same is done for the minimum flying height. This is a desired minimum flying height and can be altered after the flight plan has been generated. It appears as a drop-down list, and the user may select any value in 10 m increments from 10 to 200 m.
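The Dx/Dy computation referred to above can be sketched with an equirectangular approximation, which is adequate over survey-sized areas; the constant, names and corner coordinates below are illustrative assumptions, not values from the program.

```python
import math

M_PER_DEG_LAT = 111_320.0  # approximate metres per degree of latitude

def extents_m(lats, lons):
    """Return (Dx, Dy): largest east-west and north-south extents in metres
    of the box enclosing the input corner coordinates (decimal degrees)."""
    mean_lat = math.radians(sum(lats) / len(lats))
    dx = (max(lons) - min(lons)) * M_PER_DEG_LAT * math.cos(mean_lat)
    dy = (max(lats) - min(lats)) * M_PER_DEG_LAT
    return dx, dy

# Illustrative corners boxing a football-field-sized area
lats = [-32.8930, -32.8930, -32.8935, -32.8935]
lons = [151.7050, 151.7061, 151.7050, 151.7061]
dx, dy = extents_m(lats, lons)
print(f"Dx = {dx:.0f} m, Dy = {dy:.0f} m")  # roughly Dx = 103 m, Dy = 56 m
```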
The flight plan is then calculated and shows several options for different flying heights. These begin at the selected minimum flying height and increase in 5 m increments to give eight possible solutions. From this point, the user is able to alter the photograph epoch, the minimum flying height and, if available, the lens to be used (changing the focal length). This will update the table shown above and allow the user to make an informed decision based on all of the available information and what is required for the particular survey.

*The coordinate text file output of the system is still under construction and will be available early 2015. It will provide a file of longitude, latitude and height for the UAV flight, as selected by the user. The coordinates will be calculated from the user's selected flying-height flight plan and will be available in .txt format for integration into most flight system software.
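A sketch of how the eight flying-height options could be generated by chaining the earlier relationships; the sensor and overlap figures are assumptions for a hypothetical full-frame camera with a 24 mm lens, and the column set is abbreviated compared with the program's table.

```python
def plan_rows(min_height_m: float, epoch_s: float = 2.0,
              focal_mm: float = 24.0, sensor_along_mm: float = 24.0,
              overlap: float = 0.8, n_options: int = 8):
    """Yield (height, base distance, max speed) rows in 5 m height steps."""
    for i in range(n_options):
        d = min_height_m + 5 * i                    # flying height / Dtarget (m)
        footprint = sensor_along_mm / focal_mm * d  # along-track ground coverage (m)
        base = (1 - overlap) * footprint            # delta-x between exposures (m)
        yield d, round(base, 2), round(base / epoch_s, 2)  # eq. (14) for speed

for height, base, v_max in plan_rows(10.0):
    print(f"{height:5.1f} m  base {base:5.2f} m  v_max {v_max:4.2f} m/s")
```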
7 Appendix B

7.1 Cameras Used in Comparisons

Nikon D800
Effective Pixels: 36 Megapixels
Max Resolution: 7360 x 4912
Sensor Type: CMOS
Sensor Size: Full Frame (35.9 x 24 mm)
Minimum Shutter Speed: 30 sec
Maximum Shutter Speed: 1/8000 sec
Aperture Priority: Yes
Shutter Priority: Yes
Manual Exposure Mode: Yes
Self-Timer: Yes (2 to 20 seconds; exposures 0.5, 1, 2 or 3 seconds)
HDMI Connectivity: Yes (HDMI mini)
Remote Control: Yes (wireless or wired)
Orientation Sensor: Yes
GPS: Yes
Weight (incl. batteries): 1000 g
(Digital Photography Review, 1998-2014)

Canon EOS 6D
Effective Pixels: 20 Megapixels
Max Resolution: 5472 x 3648
Sensor Type: CMOS
Sensor Size: Full Frame (36 x 24 mm)
Minimum Shutter Speed: 30 sec
Maximum Shutter Speed: 1/4000 sec
Aperture Priority: Yes
Shutter Priority: Yes
Manual Exposure Mode: Yes
Self-Timer: Yes (2 or 10 seconds)
HDMI Connectivity: Yes (HDMI mini)
Remote Control: Yes (wireless or wired)
Orientation Sensor: Yes
GPS: Yes (built in)
Weight (incl. batteries): 770 g
(Digital Photography Review, 1998-2014)

Sony SLT-A99
Effective Pixels: 24 Megapixels
Max Resolution: 6000 x 4000
Sensor Type: CMOS
Sensor Size: Full Frame (35.9 x 24 mm)
Minimum Shutter Speed: 30 sec
Maximum Shutter Speed: 1/8000 sec
Aperture Priority: Yes
Shutter Priority: Yes
Manual Exposure Mode: Yes
Self-Timer: Yes (2 or 10 seconds)
HDMI Connectivity: Yes (mini HDMI Type C)
Remote Control: Yes (wireless or wired)
Orientation Sensor: Yes
GPS: Yes (built in)
Weight (incl. batteries): 812 g
(Digital Photography Review, 1998-2014)