Presentation of paper #7:

Nonlinear component
analysis as a kernel
eigenvalue problem
Schölkopf, Smola, Müller
Neural Computation 10, 1299-1319, MIT Press (1998)



                                                                  Group C:
              M. Filannino, G. Rates, U. Sandouk
COMP61021: Modelling and Visualization of high-dimensional data
Introduction
● Kernel Principal Component Analysis (KPCA)
  ○ KPCA is an extension of Principal Component Analysis
  ○ It computes PCA in a new feature space
  ○ Useful for feature extraction and dimensionality reduction
Motivation: possible solutions
Principal Curves

Trevor Hastie; Werner Stuetzle, “Principal Curves,” Journal of the American
Statistical Association, Vol. 84, No. 406. (Jun. 1989), pp. 502-516.

●   Optimization (including the quality of data approximation)

●   Natural geometric meaning

●   Natural projection




              http://pisuerga.inf.ubu.es/cgosorio/Visualization/imgs/review3_html_m20a05243.png
Motivation: possible solutions
Autoencoders

Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of
data with neural networks. Science, 313, 504--507.

●   Feed-forward neural network

●   Approximates the identity function




     http://www.nlpca.de/fig_NLPCA_bottleneck_autoassociative_autoencoder_neural_network.png
Motivation: some new problems



● Low input dimensions

● Problem dependent

● Hard optimization problems
Motivation: kernel trick
KPCA captures the overall variance of patterns

[Illustrations of the kernel trick mapping the data into a feature space; a video is included.]
Principle

[Figure: the data in input space and the (new) features obtained from them.]

"We are not interested in PCs in the input space, we are interested in PCs of
features that are nonlinearly related to the original ones"
Principle
Given a data set of N centered observations in a d-dimensional space,
x_k ∈ R^d, k = 1, ..., N, with Σ_k x_k = 0:

●   PCA diagonalizes the covariance matrix:

        C = (1/N) Σ_j x_j x_j^T

●   It is necessary to solve the following system of equations:

        λv = Cv,   for eigenvalues λ ≥ 0 and eigenvectors v ∈ R^d \ {0}

●   We can define the same computation in another dot product space F,
    related to the input space by a (possibly nonlinear) map Φ: R^d → F.
Principle
Given a data set of N centered observations in the high-dimensional space F,
Φ(x_1), ..., Φ(x_N), with Σ_k Φ(x_k) = 0:

●   Covariance matrix in the new space:

        C̄ = (1/N) Σ_j Φ(x_j) Φ(x_j)^T

●   Again, it is necessary to solve the following system of equations:

        λV = C̄V

●   This means that all solutions V with λ ≠ 0 lie in the span of
    Φ(x_1), ..., Φ(x_N), so we may write V = Σ_i α_i Φ(x_i) and consider
    the equivalent system λ(Φ(x_k)·V) = (Φ(x_k)·C̄V) for all k.
Principle
●   Combining the last three equations, we obtain:

        λ Σ_i α_i (Φ(x_k)·Φ(x_i)) =
            (1/N) Σ_i α_i (Φ(x_k)·Σ_j Φ(x_j)) (Φ(x_j)·Φ(x_i))   for all k

●   we define a new kernel function

        k(x_i, x_j) := (Φ(x_i)·Φ(x_j))

●   and a new N x N matrix:

        K_ij := k(x_i, x_j)

●   our equation becomes:

        Nλ K α = K² α,   which is solved via the eigenvalue problem  Nλ α = K α
Principle
●   let λ1 ≤ λ2 ≤ ... ≤ λN denote the eigenvalues of K, and α^1, ..., α^N the
    corresponding eigenvectors, with λp being the first nonzero eigenvalue;
    then we require that the corresponding vectors V^k are normalized in F,
    (V^k·V^k) = 1, which translates into:

        λ_k (α^k · α^k) = 1,   for k = p, ..., N

●   Encoding a data point y means computing its projections onto the V^k:

        (V^k · Φ(y)) = Σ_i α_i^k k(x_i, y)
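The derivation above maps directly onto a few lines of linear algebra. Below is a
minimal NumPy sketch (not the authors' code; the data, kernel choice, and variable
names are illustrative assumptions) that builds K for a polynomial kernel, solves
Nλα = Kα, normalizes the α^k so that λ_k(α^k·α^k) = 1, and encodes a new point via
Σ_i α_i^k k(x_i, y). Centering in feature space is dealt with later in the slides
and is omitted here.

```python
import numpy as np

def poly_kernel(a, b, degree=2):
    # Polynomial kernel k(a, b) = (a . b)^degree (illustrative choice of degree).
    return (a @ b) ** degree

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))            # toy data: N = 50 observations in d = 2 dims

# Gram matrix K_ij = k(x_i, x_j).
K = np.array([[poly_kernel(xi, xj) for xj in X] for xi in X])

# Solve the eigenvalue problem K alpha = (N lambda) alpha.
eigvals, eigvecs = np.linalg.eigh(K)    # ascending eigenvalues; columns are the alpha^k
nonzero = eigvals > 1e-10               # keep components with nonzero eigenvalue
alphas = eigvecs[:, nonzero] / np.sqrt(eigvals[nonzero])   # lambda_k * (alpha^k . alpha^k) = 1

# Encode a new point y: (V^k . Phi(y)) = sum_i alpha_i^k k(x_i, y).
y = rng.normal(size=2)
k_y = np.array([poly_kernel(xi, y) for xi in X])
projections = alphas.T @ k_y            # nonlinear principal components of y
print(projections[-3:])                 # the three components of largest eigenvalue
```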
Algorithm


● Centralization
  For a given data set, subtract the mean from all the observations to
  obtain centered data.
● Finding principal components
  Compute the matrix K_ij = k(x_i, x_j) using the kernel function, and find
  its eigenvectors α^k and eigenvalues λ_k.
● Encoding training/testing data
  Compute (V^k · Φ(x)) = Σ_i α_i^k k(x_i, x), where x is the training or
  test point to be encoded. This can be done since we have already
  calculated the eigenvalues and eigenvectors.
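As a sanity check on the three steps above, the same pipeline is available off the
shelf; the following is a possible usage sketch with scikit-learn's KernelPCA (an
assumption for illustration, not part of the original 1998 paper):

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 2))      # toy training data (illustrative)
X_test = rng.normal(size=(5, 2))        # toy test data

# Polynomial-kernel PCA keeping the first 3 nonlinear components;
# centering of the kernel matrix is handled internally.
kpca = KernelPCA(n_components=3, kernel="poly", degree=2)
Z_train = kpca.fit_transform(X_train)   # encode the training data
Z_test = kpca.transform(X_test)         # encode new test points
print(Z_train.shape, Z_test.shape)      # (50, 3) (5, 3)
```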
Algorithm
● Reconstructing training data
  The operation cannot be done exactly because the eigenvectors do not
  have pre-images in the original input space.
● Reconstructing a test data point
  The operation cannot be done exactly because the eigenvectors do not
  have pre-images in the original input space.
Disadvantages
● Centering in the original space does not imply centering in F; we need
  to adjust the K matrix as follows:

      K̃ = K − 1_N K − K 1_N + 1_N K 1_N,   where (1_N)_ij := 1/N

● KPCA is now a parametric technique:
  ○ choice of a proper kernel function
     ■ Gaussian, sigmoid, polynomial
  ○ Mercer's theorem
     ■ k(x,y) must be continuous, symmetric, and positive semi-definite
       (xTAx ≥ 0)
     ■ it guarantees that k corresponds to a dot product in some feature
       space F (the eigenvalues of K are non-negative)
● Data reconstruction is not possible, unless an approximation
  (pre-image estimation [9]) is used.
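Returning to the first bullet, the centering adjustment is a one-liner in practice;
a minimal sketch (K is the uncentered Gram matrix, names are illustrative):

```python
import numpy as np

def center_gram_matrix(K):
    # K_tilde = K - 1_N K - K 1_N + 1_N K 1_N, where (1_N)_ij = 1/N.
    N = K.shape[0]
    one_n = np.full((N, N), 1.0 / N)
    return K - one_n @ K - K @ one_n + one_n @ K @ one_n
```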
Advantages


●   Time complexity

    ○   we will return to this point later

●   Handles nonlinearly separable problems

●   Extraction of more principal components than PCA

    ○   Feature extraction vs. dimensionality reduction
Experiments

●   Applications
●   Data Sets
●   Methods compared
●   Assessment
●   Experiments
●   Results
Applications
●   Clustering
    ○   Density Estimation
        ■ e.g. high correlation between features
    ○   De-noising
        ■ e.g. removing lighting artefacts from bright images
    ○   Compression
        ■ e.g. image compression

●   Classification
    ○   e.g. categorisation
Datasets

Experiment Name           Created by                             Representation

● Simple example 1        Uniform distribution on [-1, 1];       y = x²
                          y = x² + noise with sd 0.1             - Unlabelled
                                                                 - 2 dimensions

● Simple example 2        Three Gaussians, sd = 0.1,             Three clusters
  (also used for the      dist [1, 1] x [0.5, 1]                 - Unlabelled
  Kernels experiment)                                            - 2 dimensions

● De-noising              The eleven Gaussians,                  A circle and a square
                          dist [-1, 1] with zero mean            - Unlabelled
                                                                 - 10 dimensions

● USPS Character                                                 Hand-written digits
  Recognition                                                    - Labelled
                                                                 - 256 dimensions
                                                                 - 9298 digits
Experiments

1 Simple Example 1
  Dataset: the uniform distribution, sd = 0.2
  Kernel: polynomial, degree 1 – 4

2 USPS Character Recognition
  Dataset: USPS
  Methods: five-layer neural networks, kernel SVM, PCA, SVM
  Parameters: kernel PCA with a polynomial kernel of degree 1 – 7 and
  32 – 2048 components (doubling each time); the best parameters for the
  task for the neural networks and the SVM

3 De-noising
  Dataset: de-noising, 11 Gaussians, sd = 0.1
  Methods: kernel autoencoders, principal curves, kernel PCA, linear PCA
  Parameters: the best parameters for the task

4 Kernels
  Kernels: radial basis function, sigmoid
  Parameters: the best parameters for the task
Methods
These are the methods used in the experiments.

Classification (supervised):
●   Neural Networks
●   SVM
●   Kernel LDA (face recognition)

Dimensionality reduction (unsupervised):
●   Linear PCA (linear)
●   Kernel PCA (nonlinear)
●   Kernel Autoencoders (nonlinear)
●   Principal Curves (nonlinear)
Assessment
●   1 Accuracy
    Classification: exact classification
    Clustering: comparable to other clusters

●   2 Time Complexity
    The time to compute

●   3 Storage Complexity
    The storage of the data

●   4 Interpretability
    How easy it is to understand
Simple Example
●   Recreated example
    Dataset: the USPS handwritten digits
    Training set: 3000
    Classifier: SVM with dot-product kernel, degree 1 – 7
    PC: 32 – 2048 (×2)

●   Nonlinear PCA paper example
    Dataset: the uniform distribution with sd 0.2
    Classifier: polynomial kernel, degree 1 – 4
    PC: 1 – 3

[Figure: the data are lifted into 3D by a kernel, PCA is applied, and the
result is projected back to 2D, giving accurate clustering of the nonlinear
features. Shown: the eigenvectors 1 – 3 of highest eigenvalue for polynomial
kernels of degree 1 – 4 on the function y = x² + B, with noise B of sd = 0.2,
from a uniform distribution on [-1, 1].]
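The toy experiment can be recreated in a few lines; a possible sketch follows (the
sampling details, the Gaussian noise, and the use of scikit-learn are assumptions,
not the original setup):

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# x uniform on [-1, 1]; y = x^2 + B with noise B of sd = 0.2 (assumed Gaussian).
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)
y = x ** 2 + rng.normal(scale=0.2, size=200)
X = np.column_stack([x, y])                       # 2-dimensional, unlabelled

# Kernel PCA with polynomial kernels of degree 1 to 4, keeping 3 components each.
for degree in range(1, 5):
    kpca = KernelPCA(n_components=3, kernel="poly", degree=degree)
    Z = kpca.fit_transform(X)
    print(f"degree {degree}: variance of first 3 components {Z.var(axis=0).round(3)}")
```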
Character recognition
     Dataset: the USPS handwritten digits
     Training set: 3000
     Classifier: SVM with dot-product kernel, degree 1 – 7
     PC: 32 – 2048 (×2)

●   The performance is better for a linear classifier trained on nonlinear
    components than on linear components

●   The performance improves over linear PCA as the number of components
    is increased

Fig: The result of the character recognition experiment
De-noising
  Dataset: the de-noising eleven Gaussians
  Training set: 100
  Kernel: Gaussian kernel (sd parameter)
  PC: 2

The de-noising is performed on the nonlinear features of the distribution.

Fig: The result of the de-noising experiment
Kernels
The choice of kernel regulates the accuracy of the algorithm and is dependent
on the application. The Mercer kernels (and their Gram matrices) used here are
the radial basis function and the sigmoid kernel.

Experiments

Radial Basis Function
  Dataset: three Gaussians, sd 0.1
  Kernel: k(x, y) = exp(−||x − y||² / 0.1)
  PC: 1 – 8

Sigmoid
  Dataset: three Gaussians, sd 0.1
  Kernel: k(x, y) = tanh(κ(x·y) + Θ)
  PC: 1 – 3
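For reference, the two kernels used in this experiment can be written as short
functions; this is a sketch, and the sigmoid parameters kappa and theta are
illustrative defaults, not values taken from the paper:

```python
import numpy as np

def rbf_kernel(x, y, c=0.1):
    # Radial basis function kernel k(x, y) = exp(-||x - y||^2 / c); c = 0.1 as in the slides.
    return np.exp(-np.sum((x - y) ** 2) / c)

def sigmoid_kernel(x, y, kappa=1.0, theta=0.0):
    # Sigmoid kernel k(x, y) = tanh(kappa * (x . y) + theta).
    return np.tanh(kappa * np.dot(x, y) + theta)

def gram_matrix(X, kernel):
    # Gram matrix K_ij = kernel(x_i, x_j) for the rows of X.
    return np.array([[kernel(xi, xj) for xj in X] for xi in X])
```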
Results
RBF (PCs 1 – 8 shown)
  - PCs 1 – 2 separate the 3 clusters
  - PCs 3 – 5 halve the clusters
  - PCs 6 – 8 split them orthogonally
  The clusters are split into 12 regions.

Sigmoid (PCs 1 – 3 shown)
  - PCs 1 – 2 separate the 3 clusters
  - PC 3 halves the 3 clusters
  - The same number of PCs is needed to separate the clusters
  - The sigmoid needs fewer PCs to halve them
Results
                      Experiment 1     Experiment 2   Experiment 3   Experiment 4

1 Accuracy
  Kernel              Polynomial 4     Polynomial 4   Gaussian 0.2   Sigmoid
  Components          8, split to 12   512            2              3, split to 6
  Accuracy                             4.4

2 Time


3 Space


4 Interpretability    Very good        Very good      Complicated    Very good
Discussions: KDA
Kernel Fisher Discriminant Analysis (KDA)

Sebastian Mika, Gunnar Rätsch, Jason Weston, Bernhard Schölkopf, Klaus-Robert Müller

● Best discriminant projection




http://lh3.ggpht.com/_qIDcOEX659I/S14l1wmtv6I/AAAAAAAAAxE/3G9kOsTt0VM/s1600-h/kda62.png
Discussions
Doing PCA in F rather than in R^d

●   The first k principal components carry more variance than any

    other k directions

●   The mean squared approximation error of the first k principal

    components is minimal

● The principal components are uncorrelated
Discussions
Going through a higher dimensionality to reach a lower
dimensionality

● Pick the right high-dimensional space



Need for a proper kernel

● What kernel to use?
   ○ Gaussian, sigmoidal, polynomial
● Problem dependent
Discussions
Time Complexity

● A lot of features (a lot of dimensions).
● KPCA works!
   ○ Subspace of F (spanned only by the observed x's)
   ○ No explicit dot-product calculation in F
● The computational complexity is hardly changed by the fact that we
   need to evaluate kernel functions rather than just dot products
   ○ (if the kernel is easy to compute)
   ○ e.g. polynomial kernels
                   Payback: using a linear classifier.
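A small illustration of why polynomial kernels are "easy to compute": a degree-2
kernel on 2-dimensional inputs equals a dot product in an explicit monomial
feature space, yet costs only one input-space dot product. The explicit map below
is one valid choice, shown for comparison only:

```python
import numpy as np

def phi(v):
    # Explicit degree-2 feature map for 2-D input: (v1^2, sqrt(2)*v1*v2, v2^2).
    return np.array([v[0] ** 2, np.sqrt(2) * v[0] * v[1], v[1] ** 2])

x = np.array([0.3, -1.2])
y = np.array([2.0, 1.0])

kernel_value = (x @ y) ** 2          # one dot product in input space, then a square
explicit_value = phi(x) @ phi(y)     # dot product in the 3-D feature space

print(kernel_value, explicit_value)  # identical up to floating-point error
```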
Discussions
Pre-image reconstruction may be impossible

Approximation can be done in F

It needs an explicit ϕ

● Regression learning problem
● Non-linear optimization problem
● Algebraic solution (rarely)
Discussions
Interpretability

● Cross-feature features

   ○   Dependent on the kernel




● Reduced-space features

   ○   Preserve the highest variance

       among the data in F.
Conclusions
Applications

●   Feature Extraction (Classification)

●   Clustering

●   Denoising

●   Novelty detection

●   Dimensionality Reduction (Compression)
References
[1] J.T. Kwok and I.W. Tsang, “The Pre-Image Problem in Kernel Methods,”
IEEE Trans. Neural Networks, vol. 15, no. 6, pp. 1517-1525, 2004.
[2] Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of
data with neural networks. Science, 313, 504-507.
[3] Sebastian Mika , Gunnar Rätsch , Jason Weston , Bernhard Schölkopf ,
Klaus-Robert Müller
[4] Trevor Hastie; Werner Stuetzle, “Principal Curves,” Journal of the American
Statistical Association, Vol. 84, No. 406. (Jun. 1989), pp. 502-516.
[5] G. Moser, "Analisi delle componenti principali", Tecniche di trasformazione
di spazi vettoriali per analisi statistica multi-dimensionale.
[6] I.T. Jolliffe, "Principal component analysis", Springer-Verlag, 2002.
[7] Wikipedia, "Kernel Principal Component Analysis", 2011.
[8] A. Ghodsi, "Data visualization", 2006.
[9] B. Scholkopf, S. Mika, A. Smola, G. Ratsch, and K.R. Muller, "Kernel PCA
pattern reconstruction via approximate pre-images". In Proceedings of the 8th
International Conference on Artificial Neural Networks, pages 147 - 152, 1998.
References


[10] J.T.Kwok, I.W.Tsang, "The pre-image problem in kernel methods",
Proceedings of the Twentieth International Conference on Machine Learning
(ICML-2003), 2003.

●   K.-R. Müller, S. Mika, G. Rätsch, K. Tsuda, and B. Schölkopf, "An
    Introduction to Kernel-Based Learning Algorithms", IEEE Transactions on
    Neural Networks, Vol. 12, No. 2, March 2001.
●   S. Mika, B. Schölkopf, A. Smola, K.-R. Müller, M. Scholz, and G. Rätsch,
    "Kernel PCA and De-Noising in Feature Spaces".
Thank you
