IEEE SIGNAL PROCESSING LETTERS, VOL. 21, NO. 1, JANUARY 2014 79
Low-Rank Neighbor Embedding for
Single Image Super-Resolution
Xiaoxuan Chen and Chun Qi, Member, IEEE
Abstract—This letter proposes a novel single-image super-resolution (SR) method based on low-rank matrix recovery (LRMR) and neighbor embedding (NE). LRMR is used to explore the underlying structures of subspaces spanned by similar patches. Specifically, the training patches are first divided into groups, and the LRMR technique is then used to learn the latent structure of each group. The NE algorithm is performed on the learnt low-rank components of the HR and LR patches to produce the SR results. Experimental results suggest that our approach reconstructs high-quality images, both quantitatively and perceptually.
Index Terms—Low-rank matrix recovery, neighbor embedding,
super-resolution.
I. INTRODUCTION
HIGH-RESOLUTION (HR) images are needed in many
practical applications [1]. Super-resolution (SR) image
reconstruction is a software technique that generates an HR image from multiple input low-resolution (LR) images or from a single LR image. In recent years, learning-based SR methods have received considerable attention and many methods have been developed [2], [3], [4], [5]. These methods focus on the training examples, with the help of which an HR image is generated from a
single LR input. Freeman et al. [2] utilized a Markov network to
model the relationships between LR and HR patches to perform
SR. Inspired by the locally linear embedding (LLE) approach
in manifold learning, Chang et al. [3] proposed the neighbor
embedding (NE) algorithm. It assumes that the two manifolds
constructed by the LR and HR patches, respectively, have similar local structures, so that an HR patch can be reconstructed by a linear combination of its neighbors. Li et al. [4] proposed to project
pairs of LR and HR patches from the original manifolds into a
common manifold with a manifold regularization procedure for
face image SR. For generic image SR, Gao et al. [5] proposed
a joint learning method via a coupled constraint.
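As an illustration, the LLE-style neighbor-embedding step of Chang et al. [3] described above can be sketched as follows. This is a minimal NumPy sketch under our own naming (function name, parameters, and the regularization constant are assumptions), not the authors' code; patch extraction and feature design are omitted:

```python
import numpy as np

def ne_super_resolve_patch(x_lr, X_lr_train, X_hr_train, k=5):
    """Reconstruct an HR patch from an LR patch via neighbor embedding.

    x_lr       : (d_l,)   input LR patch feature vector
    X_lr_train : (N, d_l) training LR patch features
    X_hr_train : (N, d_h) corresponding training HR patches
    """
    # 1. Find the k nearest LR training neighbors (Euclidean distance).
    dists = np.linalg.norm(X_lr_train - x_lr, axis=1)
    idx = np.argsort(dists)[:k]
    N_lr, N_hr = X_lr_train[idx], X_hr_train[idx]

    # 2. Solve for reconstruction weights w minimizing
    #    ||x_lr - w @ N_lr||^2  subject to  sum(w) = 1  (LLE-style).
    G = (N_lr - x_lr) @ (N_lr - x_lr).T       # local Gram matrix, (k, k)
    G += 1e-8 * np.trace(G) * np.eye(k)       # regularize if near-singular
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()                              # enforce sum-to-one constraint

    # 3. Apply the same weights to the HR neighbors (the NE assumption).
    return w @ N_hr
```

The key design point is step 3: the weights are computed entirely in the LR feature space but applied to HR patches, which is exactly where the assumption of similar local manifold structures enters.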
Manuscript received August 17, 2013; accepted October 04, 2013. Date of publication October 18, 2013; date of current version November 22, 2013. This work was supported by the National Natural Science Foundation of China under Grant 60972124, by the National High-tech Research and Development Program of China (863 Program) under Grant 2009AA01Z321, and by the Specialized Research Fund for the Doctoral Program of Higher Education under Grant 20110201110012. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Gustavo Kunde Rohde.

The authors are with the Department of Information and Communication Engineering, Xi’an Jiaotong University, Xi’an, Shaanxi 710049, China (e-mail: dada.yuasi@stu.xjtu.edu.cn; qichun@mail.xjtu.edu.cn). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/LSP.2013.2286417.

Fig. 1. (a) Distributions of standard correlation coefficients between reconstruction weights of pairs of LR and HR patches for the NE algorithm and for LRMR, respectively. (b) Average PSNR over the ten test images for different values of patch size and overlap.

In learning-based methods, how to utilize the training set is crucial. Patches vary widely in appearance. It is therefore necessary to divide the whole training set into groups by certain
strategies [4], [5] such that the patches in each group are highly related. The subspace spanned by them is therefore low-dimensional. However, how to learn the low-dimensional structure of such a subspace remains a challenge. In this letter, we employ a
robust PCA approach, the low-rank matrix recovery (LRMR)
[6], to learn the underlying structures of subspaces. LRMR has
been successfully applied to various applications, such as face
recognition [7] and background subtraction [8]. Given a data matrix whose columns come from the same pattern, the columns are in many situations linearly correlated, so the matrix should be approximately low-rank. In practical applications, however, the data may be corrupted by noise. LRMR decomposes such a data matrix into the sum of a low-rank matrix and a sparse error matrix. The low-rank component approximates the data matrix and captures the underlying structure of the subspace spanned by its columns [6]; its columns are more strongly correlated with one another than those of the original matrix.
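For concreteness, one common way to compute such a decomposition is the inexact augmented Lagrange multiplier (ALM) algorithm for robust PCA. The sketch below is our own illustrative implementation with assumed default parameters (the regularization weight lam = 1/sqrt(max(m, n)), the penalty schedule mu, rho), not the code used in this letter; it alternates singular value thresholding for the low-rank part with entrywise soft thresholding for the sparse part:

```python
import numpy as np

def lrmr(D, lam=None, tol=1e-7, max_iter=500):
    """Decompose D into a low-rank A plus a sparse error E (robust PCA),
    via the inexact ALM method. Returns (A, E) with D ~= A + E."""
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # standard RPCA weight
    norm2 = np.linalg.norm(D, 2)              # largest singular value
    Y = D / max(norm2, np.abs(D).max() / lam) # dual variable init
    A = np.zeros_like(D)
    E = np.zeros_like(D)
    mu, mu_bar, rho = 1.25 / norm2, 1.25 / norm2 * 1e7, 1.5
    d_norm = np.linalg.norm(D, 'fro')
    for _ in range(max_iter):
        # Low-rank step: singular value thresholding of D - E + Y/mu.
        U, s, Vt = np.linalg.svd(D - E + Y / mu, full_matrices=False)
        A = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Sparse step: soft thresholding (shrinkage) of D - A + Y/mu.
        T = D - A + Y / mu
        E = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
        Z = D - A - E                         # constraint residual
        Y = Y + mu * Z                        # dual ascent update
        mu = min(mu * rho, mu_bar)
        if np.linalg.norm(Z, 'fro') / d_norm < tol:
            break
    return A, E
```

Applied to a group of vectorized training patches stacked as columns, A plays the role of the low-rank component on which the NE step operates, and E absorbs the noise.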
According to the NE assumption, the reconstruction weights of an LR patch should be very similar to those of its HR counterpart. Unfortunately, this is not always the case, owing to the one-to-many mappings from LR to HR patches [4]. In this letter, we overcome this problem by using LRMR: the linear correlation among patches is enhanced through LRMR, so the local structure of the manifold constructed by the LR or HR patches becomes more compact. The NE assumption that the manifolds of LR and HR patches have similar local structures is thus better satisfied after the LRMR procedure. In Fig. 1(a), we plot the distributions of the standard correlation coefficients between the reconstruction weights of pairs of LR and HR patches [4] for the original NE algorithm and for LRMR, respectively. The reconstruction weights of LR and HR patches under LRMR are more consistent with the NE assumption than those of the original NE algorithm, which means the LRMR proce-
1070-9908 © 2013 IEEE