1. FDG-PET combined with LVQ and relevance learning:
• diagnosis of neurodegenerative diseases
• harmonization of multi-source data
Rick van Veen, Sofie Lövdal, Michael Biehl
Bernoulli Institute for Mathematics, Computer Science and Artificial Intelligence
University of Groningen / NL
APPIS 2023
4-6. overview
brief recap:
Generalized Matrix Relevance Learning Vector Quantization (GMLVQ)
machine learning analysis:
FDG-PET brain scan images, subject scores derived from 3D images
diagnosis of neurodegenerative diseases:
Alzheimer’s disease (AD), Parkinson’s disease (PD), and other disorders
reliable classification across centers?
suggested correction scheme for GMLVQ
outlook / open problems
7. prototype-based classification
• represent the data by one or several prototypes per class
• classify a query according to the label of the nearest prototype
• local decision boundaries according to (e.g.) Euclidean distances
(figure: query point in N-dim. feature space)
Learning Vector Quantization [Kohonen]
can be modified to use
- differentiable distance / dissimilarity measures
- adaptive distances in Relevance Learning
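The nearest-prototype rule above can be sketched in a few lines (a minimal illustration with hypothetical toy prototypes, not the actual FDG-PET pipeline):

```python
import numpy as np

def classify(x, prototypes, labels):
    """Assign the query x the label of its nearest prototype
    (Euclidean distance), as in basic LVQ."""
    d = np.linalg.norm(prototypes - x, axis=1)
    return labels[int(np.argmin(d))]

# toy example: two classes, one prototype each (hypothetical values)
W = np.array([[0.0, 0.0], [3.0, 3.0]])
y = np.array([0, 1])
print(classify(np.array([2.5, 2.8]), W, y))  # nearest prototype belongs to class 1
```

In full LVQ the prototypes are additionally adapted during training; only the decision rule is shown here.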
9-11. generalized quadratic distance in LVQ: [Schneider, Biehl, Hammer, 2009]
Generalized Matrix Relevance LVQ (GMLVQ)
relevance learning: parameterized adaptive distance measure instead of naïve Euclidean:
d(w, x) = (w − x)ᵀ Λ (w − x) = [ Ω (w − x) ]²   with   Λ = ΩᵀΩ
training: adaptation of prototypes and distance measure, guided by a suitable cost function w.r.t. Ω and w
interpretation: Λ summarizes
• the contribution of a single dimension (diagonal elements)
• the relevance of original features in the classifier
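The adaptive distance above is easy to state in code; this is a minimal sketch of the distance computation only (training of Ω and the prototypes is omitted):

```python
import numpy as np

def gmlvq_distance(w, x, Omega):
    """Generalized quadratic distance d(w, x) = (w - x)^T Lambda (w - x)
    with Lambda = Omega^T Omega, i.e. the squared norm of Omega (w - x)."""
    z = Omega @ (w - x)
    return float(z @ z)

# with Omega = identity this reduces to the squared Euclidean distance
w = np.array([1.0, 2.0])
x = np.array([0.0, 0.0])
print(gmlvq_distance(w, x, np.eye(2)))  # 5.0
```

Note that parameterizing via Ω guarantees Λ = ΩᵀΩ is positive semi-definite, so d(w, x) ≥ 0 for any adapted Ω.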
12-13. empirical observation / theory:
relevance matrix becomes singular, dominated by very few eigenvectors
identifies a discriminative (low-dimensional) subspace
facilitates discriminative visualization / low-dimensional representation of datasets
(figure: relevance matrix; 3-class data set, iris flowers)
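The low-dimensional representation mentioned above follows from an eigendecomposition of the relevance matrix; a sketch, assuming Λ is available after training:

```python
import numpy as np

def leading_projection(Lam, X, k=2):
    """Project data onto the k leading eigenvectors of the relevance
    matrix Lam; in GMLVQ these span the discriminative subspace."""
    vals, vecs = np.linalg.eigh(Lam)            # eigenvalues in ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]   # k leading eigenvectors
    return X @ top

# hypothetical rank-one relevance matrix: only the first feature matters
Lam = np.outer([1.0, 0.0, 0.0], [1.0, 0.0, 0.0])
X = np.random.randn(5, 3)
P = leading_projection(Lam, X, k=1)
print(P.shape)  # (5, 1)
```

For a singular Λ dominated by two eigenvectors, k = 2 yields exactly the kind of 2-D discriminative visualization shown for the iris data.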
14. Analysis of FDG-PET image data for the
diagnosis of neurodegenerative disorders
R. van Veen, S.K. Meles, R.J. Renken, F.E. Reesink, W.H. Oertel,
A. Janzen, G.-J. de Vries, K.L. Leenders, M. Biehl
FDG-PET combined with learning vector quantization allows
classification of neurodegenerative diseases and reveals the
trajectory of idiopathic REM sleep behavior disorder
Computer Methods and Programs in Biomedicine 225: 107042 (2022)
17. data: subjects
FDG-PET brain scans from 3 centers:
• CUN: Clínica Universidad de Navarra
• UGOSM: Univ. Genoa / IRCCS San Martino
• UMCG: Univ. Medical Center Groningen

Source   HC   PD   AD
CUN      19   49   -
UGOSM    44   58   55
UMCG     19   20   21

FDG-PET: 18F-Fluorodeoxyglucose positron emission tomography, 3D images of glucose uptake
groups: Healthy Controls (HC), Parkinson’s Disease (PD), Alzheimer’s Disease (AD)
(figure: example scans A, B, C; http://glimpsproject.com)
20. work flow
per subject: ~200,000 voxels per scan
masking: subject-specific anatomy; keep high-intensity, low-noise voxels
log-transform
low-dimensional projections by SSM/PCA: ~50 subject scores (PCA)
Scaled Subprofile Model / PCA, based on a separate reference group of subjects
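The preprocessing steps above can be sketched roughly as follows. This is a simplified stand-in (full SSM/PCA also removes subject-wise means, i.e. double-centers the log data); the mask rule and toy sizes are assumptions for illustration:

```python
import numpy as np

def subject_scores(voxels, mask, n_components=50):
    """Sketch of the workflow: mask and log-transform voxel data, then
    project onto principal components to obtain per-subject scores."""
    V = np.log(voxels[:, mask])      # keep selected voxels, log-transform
    V = V - V.mean(axis=0)           # remove the group mean profile
    # PCA via SVD; rows of Vt span the subprofile patterns
    U, S, Vt = np.linalg.svd(V, full_matrices=False)
    return V @ Vt[:n_components].T   # subject scores

rng = np.random.default_rng(0)
voxels = rng.uniform(1.0, 2.0, size=(30, 1000))  # 30 subjects, toy voxel count
mask = voxels.mean(axis=0) > 1.4                 # hypothetical intensity mask
scores = subject_scores(voxels, mask, n_components=10)
print(scores.shape)  # (30, 10)
```

In the actual analysis the principal components are derived from a separate reference group and the resulting ~50 scores per subject serve as classifier input.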
24-26. results
performance evaluation:
averages over 10 randomized runs of 10-fold cross-validation
accuracies, sensitivity / specificity
Receiver Operating Characteristics (ROC) for binary classification
subjects from one center only (training and testing), e.g. UGOSM
(figure: ROC curves, linear classifiers, AUC ± 0.008)
also: good multi-class accuracies (within centers)
27-28. GMLVQ multi-class (UMCG)
classes: PD (Parkinson’s Disease), DLB (Dementia with Lewy Bodies), HC (Healthy Control), AD (Alzheimer’s Disease)
identify outliers (*) and clusters:
PD (1a): young patients, onset of PD and/or mild cognitive impairment
PD (1b): older PD patients, progressed to PD dementia later
AD (2a): AD subtype (n=3?)
AD (2b): AD patients with mild cognitive impairment
29-30. disease progression (iRBD)
idiopathic Rapid Eye Movement sleep behavior disorder (iRBD)
RBD1 (baseline, first scan), RBD2 (follow-up, ca. 4 yrs.), RBD3 (in progress)
trajectory of patients in the {HC, PD, DLB, AD}-discriminative space;
post-hoc projections, iRBD not used in the training process
frequent trend: HC → PD/DLB
31. Center / source harmonization in GMLVQ training
R. van Veen, N.R. Bari Tamboli, S. Lövdal, S.K. Meles, R.J. Renken, G.-J. de Vries, D. Arnaldi, S. Morbelli, M.C. Rodriguez Oroz, P. Claver, K.L. Leenders, T. Villmann, M. Biehl
Subspace Corrected Relevance Learning with Application in Neuroimaging
in preparation (2023)
general problem in many application domains: different sources,
e.g. technical platforms, preprocessing pipelines, batch effects …
here: data from different medical centers
32-33. across-center performance
e.g. PD vs. HC, unbiased classifiers (ROC)
within center (example: UGOSM): good performance
across centers: poor performance
(figure: ROC curves, linear classifiers)
34-35. identification of centers
classification experiment: can we infer the medical center?

HC only          Classifier   Sens. (%)   Spec. (%)   AUC (ROC)
CUN vs. UGOSM    SVM (lin.)   99.75       93.00       1.00
                 GMLVQ        97.30       91.00       0.99

possible explanations:
- center-specific (pre-)processing despite supposedly identical equipment and work flows
- significantly different patient cohorts (not the case in HC)
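The effect behind this experiment can be reproduced on synthetic data: a small center-specific offset in every feature makes centers almost perfectly separable. The offset size and dimensions below are assumptions for illustration, not the real subject scores:

```python
import numpy as np

def center_auc(scores_a, scores_b, direction):
    """ROC-AUC of a 1-D projection separating two centers: the fraction
    of (a, b) pairs ranked correctly along `direction`."""
    pa = scores_a @ direction
    pb = scores_b @ direction
    return float(np.mean([a < b for a in pa for b in pb]))

# hypothetical HC subject scores from two centers with a constant bias
rng = np.random.default_rng(2)
hc_cun = rng.normal(0.0, 1.0, (40, 5))
hc_ugosm = rng.normal(1.0, 1.0, (40, 5))       # center-specific offset per feature
w = hc_ugosm.mean(axis=0) - hc_cun.mean(axis=0)  # nearest-mean direction
print(center_auc(hc_cun, hc_ugosm, w))
```

Even this crude mean-difference direction reaches a high AUC, mirroring the near-perfect center identification observed on healthy controls.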
39-42. subspace corrected GMLVQ
Basic idea:
1) identify K discriminative directions spanning a subspace
   U = span( {u_k}_{k=1}^K )
   from a GMLVQ system for the discrimination of centers,
   ideally w.r.t. a separate cohort (e.g. matching HC)
2) train a second GMLVQ system for the actual target classification
   (discrimination of diseases), restricting the relevance matrix
   to the space orthogonal to the subspace U by a correction scheme
   Ω → Ω [ I − Σ_{k=1}^K u_k u_kᵀ ],
   e.g. after each individual GMLVQ update
a clear-cut example problem (subset of data, 2 centers only):
UGOSM (Italy): 49 HC, 38 PD [early-stage PD]
CUN (Spain): 20 HC, 68 PD [late-stage PD]
target classification: early PD vs. late PD
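The correction step is a single matrix product; a minimal sketch, assuming the columns of U are orthonormal center-discriminative directions obtained from step 1:

```python
import numpy as np

def subspace_correct(Omega, U):
    """Subspace correction Omega <- Omega [ I - sum_k u_k u_k^T ]:
    project Omega onto the orthogonal complement of the
    center-discriminative subspace (columns of U, assumed orthonormal)."""
    d = Omega.shape[1]
    return Omega @ (np.eye(d) - U @ U.T)

# after correction, the adaptive distance ignores directions in U entirely
rng = np.random.default_rng(3)
Omega = rng.standard_normal((4, 4))
u = np.zeros((4, 1))
u[0, 0] = 1.0                          # hypothetical center direction e_1
Oc = subspace_correct(Omega, u)
print(np.allclose(Oc @ u, 0.0))  # True: corrected Omega annihilates u
```

Applied after each GMLVQ update, this keeps the relevance matrix Λ = ΩᵀΩ blind to purely center-specific variation throughout training.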
43-45. early PD vs. late PD
First step: classify according to data source (CUN, UGOSM)
on the basis of Healthy Controls only; identify the
discriminative eigenvector u of the relevance matrix (1)
Second step: train the classifier “early PD” vs. “late PD”;
correct the relevance matrix via
Ω → Ω [ I − u uᵀ ]
after each individual GMLVQ update step in (2)
47-49. early PD vs. late PD
uncorrected: acc. 98%, suggesting well-separated early/late stages (?)
  seemingly favorable performance, mainly due to center-specific bias
corrected: acc. 86%, purely center-specific variation removed,
  consistent with continuous progression
50-53. outlook
- simplified realizations, variations of the idea?
- availability of separate control groups?
- suitable dimension of the center-specific subspace?
- appropriate quality measure for evaluation?
- iterative removal of center-discriminative directions

single-step training, combining target classification and
source separation with an orthogonality constraint:
A Learning Vector Quantization Architecture for Transfer Learning Based
Classification in Case of Multiple Sources by Means of Null-Space Evaluation
T. Villmann, D. Staps, J. Ravichandran, S. Saralajew, M. Biehl, M. Kaden
Proc. IDA 2022, Advances in Intelligent Data Analysis XX, 354-363, Springer LNCS 13205

training of the target classification with a
penalty term w.r.t. the variance of HC data
[with Umberto Petruzzello]