International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-6367 (Print), ISSN 0976-6375 (Online), Volume 4, Issue 3, May-June (2013), © IAEME: www.iaeme.com/ijcet.asp
Journal Impact Factor (2013): 6.1302 (Calculated by GISI), www.jifactor.com
SUPER RESOLUTION IMAGING USING FREQUENCY WAVELETS
AND THREE DIMENSIONAL VIEWS BY HOLOGRAPHIC
TECHNIQUE
K. MATHEW
Karpagam University, Coimbatore, Tamilnadu-641021, India
S. SHIBU
K.R.Gouri Amma College Of Engineering For Women, Thuravoor, Cherthala, Kerala
SYNOPSIS
Super resolution imaging is achieved from low resolution images. The approach has three steps: registration, interpolation and reconstruction. In registration, the specimen under observation is photographed with a digital camera fitted with holographic equipment. Several such photographs are taken so that the relative displacement between any two images is a sub pixel shift, and these images are then superposed in the same coordinate system. The second step is interpolation by the frequency wavelet method: in this process high frequency information is collected and the resolution is increased. The final stage of the super resolution procedure is reconstruction, in which the super resolution image is restored by minimising the degradation due to aliasing and the blur due to noise. The final image thus has large resolving power with excellent clarity and a three dimensional view.
Key Words: Hologram, bicubic interpolation, contrast ratio, frequency wavelets, three dimensional views.
INTRODUCTION
The requisites of an ideal image are very high resolution, good clarity, and a three dimensional view. These qualities are essential for precise analysis, and such images have wide application in the military, the medical field, remote sensing and consumer electronics.
Resolution implies that different parts of a sample are seen separately. The resolving power of an optical device is its ability to show two nearby objects as separate. When a
- 2. International Journal of Computer Engineering and Technology (IJCET), ISSN 0976-
6367(Print), ISSN 0976 – 6375(Online) Volume 4, Issue 3, May – June (2013), © IAEME
571
point object is viewed by an optical device, then owing to the diffraction effect it appears as a central bright spot surrounded by concentric subsidiary minima and maxima. Owing to this diffraction pattern the image is blurred. When we observe two nearby objects, they may not be seen as separate but may appear as a single object, in which case they are not resolved. According to Rayleigh's criterion of resolution(1), two nearby point objects are resolved if the central spot of one image lies on or outside the first subsidiary minimum of the other. From this principle it follows that the resolving power of an optical device is proportional to a/λ, a being the aperture of the device and λ the wavelength of the light used. This diffraction limit of resolution was recognised by Abbe(2) in the 19th century: it is due to diffraction that a point source of light, when imaged through an optical device, appears as a spot of finite size. This intensity profile defines a point spread function (PSF), and the full width at half maximum of the PSF in the x-y (lateral) direction and in the axial z direction is given approximately by

Δx, Δy ≈ λ / (2 NA)  and  Δz ≈ 2λμ / (NA)²,

where λ is the wavelength, μ the refractive index and NA the numerical aperture of the objective lens. The resolving power is inversely proportional to this full width at half maximum. Hence the resolving power can be increased either by increasing the aperture of the device or by decreasing the wavelength of the light used.
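These two limits are easy to evaluate numerically. A minimal sketch, where the wavelength and numerical aperture values are illustrative assumptions and not taken from this paper:

```python
# Diffraction-limited resolution estimates from the PSF full width at half
# maximum: lateral dx = lambda / (2 NA), axial dz = 2 lambda mu / NA^2.
def lateral_fwhm(wavelength_nm, na):
    """Lateral (x-y) resolution limit in nanometres."""
    return wavelength_nm / (2.0 * na)

def axial_fwhm(wavelength_nm, na, mu=1.0):
    """Axial (z) resolution limit in nanometres; mu is the refractive index."""
    return 2.0 * wavelength_nm * mu / na**2

# Illustrative values: green light with an oil-immersion objective.
dx = lateral_fwhm(550, 1.4)        # about 196 nm
dz = axial_fwhm(550, 1.4, 1.515)   # about 850 nm
```

As the formulas predict, raising NA sharpens both limits, and the axial limit degrades much faster than the lateral one as NA falls.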
The imaging by optical devices is diffraction limited, but in the digital imaging process there are various methods for increasing resolution, and suitable algorithms are available to obtain super resolution from low resolution images. One such method is interpolation by the frequency wavelet method. The resolving power can be increased by adding high frequency information of a specific image model and also by removing the ambiguity in the image due to sub pixel shift, blur due to defocus and degradation due to aliasing.
Besides the three dimensional character of the image, we are interested in its clarity. The image is blurred by diffraction effects and by various other factors such as defocus. The clarity of a particular part of the image is described by the contrast factor or modulation factor, defined as

c = (Imax − Imin) / (Imax + Imin),

where Imax is the maximum intensity and Imin is the minimum intensity. The contrast factor increases with the frequency of the observed signal. Since the magnification is the same at a given frequency, we get distortion free magnification and the image can be a true replica of the specimen. So we have a suitable algorithm for achieving three dimensional images of high resolution, good clarity and distortion free magnification.

Holography – record of phase variation
When we photograph an object by traditional means using the light field, we get a point by point record of the square of the amplitude. The light reflected from the specimen carries irradiance information; it does not describe the phase of the wave emanating from the object. If the amplitude and the phase of the emanated wave can be reconstructed, the resulting light field forms an image that is perfectly three dimensional, exactly as if the object were before you. One such method is used in the phase contrast microscope; another such device is the holographic imaging system designed by Dennis Gabor. He photographically recorded the interference pattern generated by the interaction of a monochromatic light beam scattered from an object and a coherent reference beam. The record of the resulting pattern is a hologram, and the image can be reconstructed by diffraction of the coherent beam by the hologram. This image is digitised and stored as a matrix of binary digits in computer memory, so the pixels, besides recording the amplitude, carry depth information, because the phase change of the beam diffracted from the hologram is proportional to the depth from which the light was scattered in the specimen.
When we photograph a point object, it is imaged as a smear of light described by a point spread function s(x, y). Under incoherent illumination these elementary flux density patterns overlap and add linearly to create the final image; an object is a collection of point sources, each of which is imaged by a spread function. The object plane wave front is composed of different Fourier component plane waves travelling in the directions associated with the spatial frequencies of the object light field reflected or transmitted. Each of these Fourier plane waves interferes with the reference wave. With the scattered object wavelets arriving at an angle θ, the relative phase of the waves varies from point to point and can be expressed as φ = 2πx sin θ / λ. If two such waves have amplitude E0, the resulting field has amplitude

E = 2E0 cos(φ/2) sin(ωt − kx − φ/2)

and the irradiance distribution is 2cε0E0² cos²(φ/2), i.e. cε0E0²(1 + cos φ), ε0 being the permittivity of free space. Hence we have a cosinusoidal distribution across the film plane. When a monochromatic beam is diffracted from the above hologram, the emerging field is proportional to I(x, y) ER(x, y), where ER(x, y) is the reconstructing wave incident on the hologram, ER(x, y) = EOR cos[2πνt + φ(x, y)], EOR being the amplitude of the reconstructing wave. Hence the final wave is

EF(x, y) = ½ EOR (EOB² + EOO²) cos[2πνt + φ(x, y)] + ½ EOR EOB EOO cos[2πνt + 2φ − φ0] + ½ EOR EOB EOO cos[2πνt + φ0],

where φ0 is the phase from the object, EOB the amplitude of the background (reference) beam at the film and EOO the amplitude of the wave scattered from the object. The final wave consists of three parts:
(1) The amplitude modulated version of the reconstructing wave. This is the zeroth order, undeflected direct beam.
(2) The sum term, with amplitude proportional to the object wave EOO and a phase contribution 2φ(x, y) arising from the tilt between the background and reconstructing wave fronts at the plane of the hologram. It also contains the phase of the object, and this phase is a measure of the depth at the position of the object.
(3) The difference term. Except for a multiplicative constant, this term has precisely the form of EOO(x, y) with the actual phase of the object, so the difference wave represents the scene or object exactly as it is. This phase dependent image is digitised to obtain a low resolution image, so the pixels of these low resolution images carry depth information.
Registration of the low resolution images – Determination of shift due to plane motion
and rotation in frequency domain
When a series of low resolution (magnified) images is taken in a short interval of time, there is a relative displacement between the images owing to small camera motion. The motion(3) can be described in terms of three parameters, namely a horizontal shift h, a vertical shift ℓ, and a rotation through an angle θ about the z axis. The relative displacement of the input images can be determined precisely to sub pixel accuracy. The reference signal is denoted by f(x, y). Owing to the horizontal displacement h, the vertical displacement ℓ and the rotation angle θ, the resulting image can be expressed as

f1(x, y) = f(x cos θ − y sin θ + h, y cos θ + x sin θ + ℓ).

Expanding cos θ = 1 − θ²/2! + … and sin θ = θ − θ³/3! + …, f1(x, y) can be expanded as

f1(x, y) = f(x, y) + (h − yθ − xθ²/2) ∂f/∂x + (ℓ + xθ − yθ²/2) ∂f/∂y + ….

The error function between f1 and f is given as
the sum of squared differences between the expanded form of f1(x, y) and f1(x, y) itself, i.e.

E = Σ [ f(x, y) + (h − yθ − xθ²/2) ∂f/∂x + (ℓ + xθ − yθ²/2) ∂f/∂y − f1(x, y) ]²,

E being the error function. The summation runs over the overlapping regions of f(x, y) and f1(x, y). Since we require this mean square error to be a minimum, we set

∂E/∂h = 0, ∂E/∂ℓ = 0 and ∂E/∂θ = 0.
Writing R = x ∂f/∂y − y ∂f/∂x and keeping first order terms, this gives the set of linear equations

Σ (∂f/∂x)² h + Σ (∂f/∂x)(∂f/∂y) ℓ + Σ R (∂f/∂x) θ = Σ (∂f/∂x)(f1 − f)

Σ (∂f/∂x)(∂f/∂y) h + Σ (∂f/∂y)² ℓ + Σ R (∂f/∂y) θ = Σ (∂f/∂y)(f1 − f)

Σ R (∂f/∂x) h + Σ R (∂f/∂y) ℓ + Σ R² θ = Σ R (f1 − f)
The motion parameters can be computed by solving the above set of linear equations.
The relative displacements of the undersampled LR images can thus be estimated with sub pixel accuracy, and we can then combine these LR images.(4) Figure 1 shows three 4x4 pixel LR frames on an 8x8 HR grid. Each symbol (square, circle, triangle) shows the sampling points of a frame with respect to the HR grid. One arbitrary frame, marked by the circular symbols, is selected as the reference frame. The sampling grid of the triangular frame is a simple translation of the reference grid; the sampling grid of the square frame has translational, rotational and magnification components.
Thus there is no single regular grid of LR sampling points for super resolution. When the pixel values from all the frames are considered, the data are irregularly sampled: each individual LR frame is sampled on a rectangular grid, but on the high resolution grid the combined sampling is irregular, and this is called interlaced sampling.
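The normal equations above can be solved directly once the image gradients are formed. A minimal sketch, assuming the two frames are grayscale NumPy arrays and that central differences are an adequate derivative estimate (both are implementation assumptions, not prescribed by the paper):

```python
import numpy as np

def estimate_motion(f, f1):
    """Estimate horizontal shift h, vertical shift l and rotation theta
    between a reference frame f and a displaced frame f1 by solving the
    3x3 normal equations of the first-order Taylor expansion."""
    fx = np.gradient(f, axis=1)            # df/dx (along columns)
    fy = np.gradient(f, axis=0)            # df/dy (along rows)
    y, x = np.mgrid[0:f.shape[0], 0:f.shape[1]].astype(float)
    r = x * fy - y * fx                    # rotation term x df/dy - y df/dx
    d = f1 - f
    A = np.array([[np.sum(fx * fx), np.sum(fx * fy), np.sum(r * fx)],
                  [np.sum(fx * fy), np.sum(fy * fy), np.sum(r * fy)],
                  [np.sum(r * fx),  np.sum(r * fy),  np.sum(r * r)]])
    b = np.array([np.sum(fx * d), np.sum(fy * d), np.sum(r * d)])
    h, l, theta = np.linalg.solve(A, b)
    return h, l, theta
```

In practice the estimate is refined iteratively, warping f1 by the current parameters and re-solving, as in Irani and Peleg(3).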
Interpolation and reconstruction of super resolution images
Super resolution imaging from the low resolution image can be achieved by any one
of the following methods.
(1) Polynomial based image interpolation (5, 6)
The process of image interpolation aims at estimating intermediate pixels between the known pixel values. To estimate an intermediate pixel at x, the neighbouring pixels and the distance s are incorporated in the estimation process. The interpolation function f̂(x) can be written as

f̂(x) = Σk Ck β(x − xk),

where β(x) is the interpolation kernel, x and xk represent the continuous and discrete spatial positions, and Ck are the interpolation coefficients. If f(x) is band limited, then f̂(x) = Σk f(xk) sinc(x − xk). Since this interpolation formula is impractical owing to the slow rate of decay of its kernel, we prefer approximations such as the bicubic (cubic convolution) interpolation formula

f̂(x) = f(xk−1) (−S³ + 2S² − S)/2 + f(xk) (3S³ − 5S² + 2)/2 + f(xk+1) (−3S³ + 4S² + S)/2 + f(xk+2) (S³ − S²)/2,

where S is the fractional distance from xk to the interpolation point. In bicubic interpolation the sample points are used to evaluate the interpolation coefficients, and the technique is performed row by row and then column by column.
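The four point kernel above can be applied directly to a 1D signal (the 2D bicubic result follows by applying it row by row and then column by column). A small sketch; the border clamping policy is an implementation assumption:

```python
import numpy as np

def keys_interpolate(f, x):
    """Cubic convolution (Keys kernel) interpolation of 1D samples f at
    continuous position x; s is the fractional offset past sample k."""
    k = int(np.floor(x))
    s = x - k
    # Clamp neighbour indices at the signal borders.
    idx = lambda j: min(max(k + j, 0), len(f) - 1)
    w_m1 = (-s**3 + 2 * s**2 - s) / 2
    w_0  = (3 * s**3 - 5 * s**2 + 2) / 2
    w_p1 = (-3 * s**3 + 4 * s**2 + s) / 2
    w_p2 = (s**3 - s**2) / 2
    return (w_m1 * f[idx(-1)] + w_0 * f[idx(0)]
            + w_p1 * f[idx(1)] + w_p2 * f[idx(2)])
```

The weights sum to 1 for every s, the kernel passes exactly through the samples at s = 0 and s = 1, and in the interior it reproduces linear and quadratic signals exactly, which is what makes this kernel the standard choice.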
(2) Regularised Image Interpolation (7, 8)
The image interpolation problem for captured digital image is an inverse problem.
Generally the super resolution image reconstruction approach is an ill posed problem
because of the insufficient number of L.R. images and ill conditioned blur operations. The
imaging process can be expected asݕୀݓ݊ ݊ for k=1, 2, ……………. pand matrix Wk
is of size (N1 N2)2
L1 N1 L2 N2represents the degradation by blurring motion and
subsampling. x denotes HR pixels, yk denotes LR pixels. Based on the above observation
model, the aim of SR image reconstruction is to estimate HR images x from LR images yk
for k= 1, 2, ……. P. Knowing the registration parameter the above observation model is
completely specified. The deterministic regularized SR approach solves the inverse
problem of finding x using prior information about the solution which can be used to make
the problems well posed. Using constrained least squares we can choose a suitable form of
x to minimizethe lagrangiao∑ିଵ,
||ݕ െ ܹܺ ||ଶ
ߙ ||ܿ||ݔଶ
where c is high pass
filter||. || ן represents the lagrangian multiplier known as regularising parameter. Larger
the value of ,ן we get smoother solutions and we can find a unique estimate x which
minimize the above cost function. One of the most basic deterministic iterative techniques
is used for solving the equation.∑ୀଵ
ൣܹ
்
ܹ ן ܥ்
൧ݔො ൌ ∑ୀଵ
ܹ
்
ݕ and this leads to
the following iteration for ݔො, ݔො ାଵ
ൌ ݔො ߚ ሾ∑ୀଵ
ܹ
் ሺݕ െܹ ݔො
ሻെן ܥ்ሺݔොሻሿWhere
ߚ denotes the convergence parameter and ܹ
்
is the unsampling operator and a type of blur
operator The above method is the generalized interpolation approach.
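A toy sketch of this iteration for a single LR frame (p = 1), with W taken to be 2x box filter downsampling, W^T the matching replicating upsampler, and C a discrete (periodic) Laplacian. These operator choices and the parameter values are illustrative assumptions, not the paper's:

```python
import numpy as np

def downsample(x):
    """W: 2x2 box filter followed by decimation by 2 (blur + subsample)."""
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

def upsample(y):
    """W^T for the box-filter W: replicate each LR pixel into a 2x2 block,
    scaled by the same 1/4 weight."""
    return 0.25 * np.kron(y, np.ones((2, 2)))

def laplacian(x):
    """C: a simple high pass filter penalising non-smooth solutions."""
    out = -4.0 * x
    out += np.roll(x, 1, 0) + np.roll(x, -1, 0)
    out += np.roll(x, 1, 1) + np.roll(x, -1, 1)
    return out

def reconstruct(y, alpha=0.01, beta=0.5, iters=200):
    """Iterate x <- x + beta * (W^T (y - W x) - alpha * C^T C x)."""
    x = np.kron(y, np.ones((2, 2)))        # initial HR guess by replication
    for _ in range(iters):
        # C is symmetric here, so C^T C x = laplacian(laplacian(x)).
        x = x + beta * (upsample(y - downsample(x))
                        - alpha * laplacian(laplacian(x)))
    return x
```

Note that the replicated initial guess already satisfies the data term exactly (downsampling it returns y), so with α = 0 the iteration is at a fixed point from the start; a nonzero α trades some data fidelity for smoothness.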
(3) Interpolation by frequency wavelet technique
Super resolution image from low resolution image can be obtained by frequency
wavelet methods(9)
, in the wavelet method signals can be decomposed into components at
different scales or resolutions. The advantage of this method is that signals trends at different
scales can be isolated and studied. Global trends can be examined at coarse scales using
scalar functions ሺݐሻ where local variations are better analysed at fine scales. A brief
summary of orthonormal wavelet analysis of 1D and 2D signals are presented here. for a
detailed study one can refer to the basic theory of wavelet presented by Mallet in his
article,(10,11)
‘Wavelength bases’. The high frequency coefficient in the wavelet expansion are
estimated by using the sample points of LR images in high resolution grid and then HR image
is obtained by applying the wavelet transforms.
For a function f(t) ∈ L²(R), the projection fj(t) of f(t) onto the subspace Vj represents an approximation of the function at scale j; φ(t) is known as the scaling function. The approximation becomes more accurate as j increases. The difference between successive approximations is a detail signal built from wavelets ψ(t) spanning a wavelet subspace Wj. So we can decompose the approximation space as VJ+1 = VJ ⊕ WJ, and we can expand the function f(t) ∈ L²(R) as

f(t) = Σk∈Z aJ,k φJ,k(t) + Σj≥J Σk bj,k ψj,k(t), where

aJ,k = ∫ f(t) φJ,k(t) dt and bj,k = ∫ f(t) ψj,k(t) dt.    (A)
This wavelet decomposition can be extended to 2D images. Then Vj(2) = Vj(1) ⊗ Vj(1), the 2D scaling function is Φ(t, s) = φ(t) φ(s), and

f(t, s) = Σk,ℓ∈Z aJ,k,ℓ ΦJ,k,ℓ(t, s) + Σj≥J Σk,ℓ∈Z [ bj,k,ℓ^h ψj,k,ℓ^h(t, s) + bj,k,ℓ^v ψj,k,ℓ^v(t, s) + bj,k,ℓ^d ψj,k,ℓ^d(t, s) ], with

aJ,k,ℓ = ∬ f(t, s) ΦJ,k,ℓ(t, s) dt ds

bj,k,ℓ^h = ∬ f(t, s) ψj,k,ℓ^h(t, s) dt ds

bj,k,ℓ^v = ∬ f(t, s) ψj,k,ℓ^v(t, s) dt ds    (B)

bj,k,ℓ^d = ∬ f(t, s) ψj,k,ℓ^d(t, s) dt ds,

where h, v and d denote the horizontal, vertical and diagonal detail orientations.
The expansion formulas (A) and (B) are used to estimate the wavelet coefficients. Using these estimates we interpolate the function values at the HR grid points. Consider the case of non-uniformly sampled 1D signals. Suppose we have a function f(t) for which we want M uniformly distributed values, say at t = 0, 1, …, M − 1, and we are given p non-uniformly sampled data points of f(t) at t = t0, t1, …, tp−1, with 0 ≤ ti < M. We take the unit spacing grid to be the resolution level V0. By repeated application of the decomposition, V0 = VJ ⊕ Σj=J..−1 Wj, where J ≤ −1. So we can separate f(t) ∈ V0 into approximation components and detail components, and f(t) is expanded as

f(t) = Σk aJ,k φJ,k(t) + Σj=J..−1 Σk bj,k ψj,k(t).

Substituting the values of the sampled data, we get a set of p linear equations. The desired values of f(t) at the HR grid points can then be computed using the estimated coefficients:
f(t) = Σk∈S aJ,k φJ,k(t) + Σk∈S bj,k ψj,k(t). For wavelet super resolution the data are sampled non-uniformly but in a recurring manner; this type of sampling is called interlaced sampling.
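As an illustration of forming and solving these linear equations, the sketch below fits Haar scaling and wavelet coefficients at a single coarse level to irregular 1D samples by least squares, then evaluates the expansion on the uniform HR grid. The Haar basis, the single level J = −1, and the grid conventions are all simplifying assumptions:

```python
import numpy as np

def haar_phi(j, k, t):
    """Haar scaling function phi_{j,k}(t) = 2^{j/2} phi(2^j t - k)."""
    u = (2.0 ** j) * np.asarray(t, float) - k
    return (2.0 ** (j / 2.0)) * ((u >= 0) & (u < 1)).astype(float)

def haar_psi(j, k, t):
    """Haar wavelet psi_{j,k}(t): +1 on the first half of its support,
    -1 on the second half."""
    u = (2.0 ** j) * np.asarray(t, float) - k
    return (2.0 ** (j / 2.0)) * (((u >= 0) & (u < 0.5)).astype(float)
                                 - ((u >= 0.5) & (u < 1)).astype(float))

def fit_and_resample(t_samples, f_samples, m, j=-1):
    """Least-squares estimate of the coefficients a_{j,k}, b_{j,k} from
    irregular samples, then evaluation of f on the uniform grid 0..m-1."""
    ks = range(int(np.ceil(m * 2.0 ** j)))
    basis = [lambda t, k=k: haar_phi(j, k, t) for k in ks]
    basis += [lambda t, k=k: haar_psi(j, k, t) for k in ks]
    A = np.column_stack([b(t_samples) for b in basis])
    coef, *_ = np.linalg.lstsq(A, np.asarray(f_samples, float), rcond=None)
    grid = np.arange(m, dtype=float)
    return np.column_stack([b(grid) for b in basis]) @ coef
```

Here V(-1) ⊕ W(-1) = V0, so any signal that is piecewise constant on unit intervals is recovered exactly whenever each interval contains at least one sample.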
A common feature of wavelet SR reconstruction is the assumption that the LR images to be enhanced are the low pass filtered subbands of a decimated wavelet transform (DWT) of the HR image. So in all wavelet SR reconstruction methods(12), one estimates the high pass filtered subbands of the DWT of the HR image and constructs the HR image by the inverse wavelet transform. As the simplest approach to HR image reconstruction, we set all elements of these high pass bands to zero; this method is called wavelet domain zero padding. Alternatively, the wavelet coefficients can be estimated using a Laplacian pyramid.
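For the orthonormal Haar transform, wavelet domain zero padding has a particularly simple closed form, sketched below. Treating the LR frame directly as the approximation subband is the stated assumption of these methods; the Haar choice is ours, for simplicity:

```python
import numpy as np

def wzp_haar(lr, levels=1):
    """Wavelet domain zero padding: take `lr` to be the approximation
    subband of an orthonormal Haar DWT of the HR image, set every high
    pass subband to zero, and apply the inverse transform."""
    hr = np.asarray(lr, dtype=float)
    for _ in range(levels):
        # Inverse Haar step with zero details: each approximation
        # coefficient spreads equally over its 2x2 block, scaled by 1/2.
        hr = 0.5 * np.kron(hr, np.ones((2, 2)))
    return hr
```

Because the orthonormal approximation band carries a gain of 2 per level, an LR frame that is a plain pixel average should be multiplied by 2**levels first. The zeroed bands are exactly what the Laplacian pyramid estimate would instead try to fill in.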
The pyramid data structure used for image decomposition and reconstruction based on the Laplacian pyramid is shown in the figure.
As in figure 'a', g0 is an image and g1 is the result of applying an appropriate low pass filter and downsampling to g0. The prediction error is L0 = g0 − upsample(g1). Then g1 is low pass filtered and downsampled to get g2, and the second error is L1 = g1 − upsample(g2). Repeating this several times, we obtain Lk = gk − upsample(gk+1). In this implementation each level is smaller than its predecessor by a scale factor 1/2, owing to the reduced sample density. This is called the Laplacian pyramid, and the low pass filtered images g0, g1, … form a Gaussian pyramid.
The pyramid reconstruction is shown in figure 'b'. The original image can be recovered exactly from the top level gn and the details L0, L1, …, Ln−1: upsample gn once and add Ln−1 to obtain gn−1, and repeat this process until level 0 is reached and g0 is recovered. This is inverse Laplacian pyramid generation. But our aim is to get a higher resolution than g0. Suppose the predicted high resolution image is g−1; then g−1 = L−1 + upsample(g0). If we regard L0, L1, …, Ln as a new image pyramid, we may obtain a residual pyramid from it by the same method used to obtain L0, L1, …, Ln from g0, g1, …, gn, and L−1 is then predicted by the process of pyramid reconstruction: upsample the top residual level, add the next level down, and repeat until L−1 is obtained. By this pyramid structure method we obtain the high resolution image.
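The build and collapse steps above can be sketched directly. The 2x2 box filter and the pixel replicating expand used here are simplifying assumptions in place of the usual Gaussian filter:

```python
import numpy as np

def shrink(g):
    """REDUCE: 2x2 box filter followed by downsampling by 2."""
    return 0.25 * (g[0::2, 0::2] + g[1::2, 0::2] + g[0::2, 1::2] + g[1::2, 1::2])

def expand(g):
    """EXPAND: upsample by pixel replication."""
    return np.kron(g, np.ones((2, 2)))

def build_pyramid(g0, levels):
    """Gaussian levels g0..gn plus Laplacian details L_k = g_k - expand(g_{k+1})."""
    gs = [np.asarray(g0, dtype=float)]
    for _ in range(levels):
        gs.append(shrink(gs[-1]))
    ls = [gs[k] - expand(gs[k + 1]) for k in range(levels)]
    return gs, ls

def collapse_pyramid(ls, g_top):
    """Inverse generation: expand the top level and add back each detail."""
    g = g_top
    for detail in reversed(ls):
        g = expand(g) + detail
    return g
```

Collapsing reproduces g0 exactly by construction; predicting g−1 then amounts to building the same structure over the detail images L0, …, Ln and expanding one level beyond the base.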
Experimental Result
In our technique of super resolution imaging, we use digital camera having
holographic device. We have low resolution images whose pixel carry depth information. The
relative displacement between LR images is calculated precisely upto sub pixel shift. The
pixels of LR images are marked in HR grid. This is called interlacing sampling. The details of
our technique are shown in the following block diagram.
This second step of our method is interpolation by frequency wavelet. At different
levels resolution is improved by collecting high frequency information. The imaging function
is expanded in terms of frequency wavelets. The coefficients in wavelet expansion is
calculated with the help of irregular sample points in the high resolution grid. The resulting
image is having high resolution. But the image is blurred owing to degradation caused by
aliasing and defocus. This error is eliminated by techniques of the least square value of the
error and we get output image.
Fig. Ia shows the image obtained using nearest neighbour interpolation; Fig. Ib shows the super resolution image of high clarity obtained using the frequency wavelet technique.
Fig. IIa shows the input image; Fig. IIb shows the final output image, with depth information, after 10 iterations. So by our technique we produce super resolution images of high clarity in three dimensions.
REFERENCES
1. Eugene Hecht, 'Optics', Pearson Education, Inc., pp. 613-645.
2. Max Born and E. Wolf, 'Principles of Optics', Pergamon Press Ltd.
3. Michal Irani and Shmuel Peleg, 'Improving Resolution by Image Registration', CVGIP: Graphical Models and Image Processing, Vol. 53, No. 3, pp. 231-239, 1991.
4. Nhat Nguyen and Peyman Milanfar, 'Wavelet Superresolution', Circuits, Systems and Signal Processing, Vol. 19, No. 4, pp. 321-338, 2000.
5. Sky McKinley and Megan Levine, 'Cubic Spline Interpolation', Math 45: Linear Algebra.
6. Robert G. Keys, 'Cubic Convolution Interpolation for Digital Image Processing', IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. ASSP-29, No. 6, pp. 1153-1160, December 1981.
7. Deepu Rajan and Subhasis Chaudhuri, 'A Generalized Interpolation Scheme for Image Scaling and Super Resolution', Proc. Erlangen Workshop '99 on Vision, Modeling and Visualization, University of Erlangen, Germany, November 1999, pp. 301-308.
8. J. Meinguet, 'Multivariate Interpolation at Arbitrary Points Made Simple', Journal of Applied Mathematics and Physics (ZAMP), Vol. 30, pp. 292-304, 1979.
9. K. Mathew and S. Shibu, 'Wavelet Based Technique for Super Resolution Image Reconstruction', International Journal of Computer Applications, Vol. 33, No. 7, pp. 11-17, November 2011.
10. S. Mallat, 'Multiresolution Approximations and Wavelet Orthonormal Bases of L²(R)', Transactions of the American Mathematical Society, Vol. 315, pp. 68-87.
11. S. Mallat, 'A Wavelet Tour of Signal Processing', Academic Press, San Diego, Calif.
12. H. C. Liu, Y. Feng and G. Y. Sun, 'Wavelet Domain Image Super Resolution Reconstruction Based on Image Pyramid and Cycle Spinning', Journal of Physics: Conference Series, Vol. 48, pp. 417-421, 2006; International Symposium on Instrumentation Science and Technology.
13. Zhao, Hua Han and Sulong Peng, 'Wavelet Domain HMT-Based Image Super Resolution', Proc. International Conference on Image Processing, pp. 953-956.
14. Benjamin Langmann, Klaus Hartmann and Otmar Loffeld, 'Comparison of Depth Super Resolution Methods for 2D/3D Images', International Journal of Computer Information Systems and Industrial Management Applications, ISSN 2150-7988, Vol. 3, pp. 635-645, 2011.
15. Vinod Karar and Smarajit Ghosh, 'Effect of Varying Contrast Ratio and Brightness Nonuniformity Over Human Attention and Tunneling Aspects in Aviation', International Journal of Electronics and Communication Engineering & Technology (IJECET), Volume 3, Issue 2, 2012, pp. 400-412, ISSN Print: 0976-6464, ISSN Online: 0976-6472.