In two-class score-based problems, the scores produced by an ensemble of experts are typically combined to obtain distributions of positive and negative patterns that exhibit a larger degree of separation than the distributions of the individual scores. Combination is usually carried out by a "static" linear combination of scores, where the weights are computed by maximizing a performance function. These weights are the same for all patterns, as one weight is assigned to each expert to be combined. In this paper we propose a "dynamic" formulation where the weights are computed individually for each pattern. Results on a biometric dataset show the effectiveness of the proposed combination methodology with respect to "static" linear combinations and trained combination rules.
Dynamic Score Combination: A supervised and unsupervised score combination method
1. Dynamic Score Combination: a supervised and unsupervised score combination method
R. Tronci, G. Giacinto, F. Roli
DIEE - University of Cagliari, Italy
Pattern Recognition and Applications Group
http://prag.diee.unica.it
MLDM 2009 - Leipzig, July 23-25, 2009
2. Outline
- Goal of score combination mechanisms
- Dynamic Score Combination
- Experimental evaluation
- Conclusions
Giorgio Giacinto MLDM 2009 - July 23-25, 2009 2
3. Behavior of biometric experts
Genuine scores should produce a positive outcome.
Impostor scores should produce a negative outcome.

FNMR_j(th) = \int_{-\infty}^{th} p(s_j \mid s_j \in \text{positive}) \, ds_j = P(s_j \le th \mid s_j \in \text{positive})

FMR_j(th) = \int_{th}^{+\infty} p(s_j \mid s_j \in \text{negative}) \, ds_j = P(s_j > th \mid s_j \in \text{negative})
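The two error rates defined above can be estimated empirically as fractions of misclassified scores at a given threshold. A minimal numpy sketch, assuming higher scores mean "more likely genuine" (the function names and toy data are illustrative, not from the paper):

```python
# Empirical FNMR/FMR at a threshold th, following the slide's definitions.
import numpy as np

def fnmr(genuine_scores, th):
    """False Non-Match Rate: P(s <= th | genuine)."""
    return float(np.mean(np.asarray(genuine_scores, dtype=float) <= th))

def fmr(impostor_scores, th):
    """False Match Rate: P(s > th | impostor)."""
    return float(np.mean(np.asarray(impostor_scores, dtype=float) > th))

genuine = [0.9, 0.8, 0.7, 0.4]   # made-up genuine scores
impostor = [0.1, 0.2, 0.3, 0.6]  # made-up impostor scores
print(fnmr(genuine, 0.5))  # 0.25: one genuine score falls at or below 0.5
print(fmr(impostor, 0.5))  # 0.25: one impostor score exceeds 0.5
```

Sweeping `th` over the score range traces the trade-off between the two rates, which is what the ROC-based measures on the next slides summarize.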
4. Performance assessment
- True Positive Rate = 1 - FNMR
5. Goal of score combination
- To improve system reliability, different experts are combined
  - different sensors, different features, different matching algorithms
- Combination is typically performed at the matching score level
6. Goal of score combination
[Diagram: the scores of the individual experts are fused into a single combined score]
7. Goal of score combination
- The aim is to maximize the separation between classes, e.g.

  FD = \frac{(\mu_{gen} - \mu_{imp})^2}{\sigma_{gen}^2 + \sigma_{imp}^2}

- Thus the distributions have to be shifted far apart, and the spread of the scores reduced
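The Fisher distance FD above is straightforward to compute from the two score populations. A small sketch on made-up data (the helper name is mine, not the paper's):

```python
# Fisher distance between genuine and impostor score distributions:
# FD = (mu_gen - mu_imp)^2 / (var_gen + var_imp), as on the slide.
import numpy as np

def fisher_distance(genuine, impostor):
    g = np.asarray(genuine, dtype=float)
    i = np.asarray(impostor, dtype=float)
    num = (g.mean() - i.mean()) ** 2
    den = g.var() + i.var()
    return num / den

# Toy data: well-separated, low-spread populations give a large FD.
print(fisher_distance([0.8, 1.0], [0.0, 0.2]))  # 32.0
```

Shifting the means apart increases the numerator; reducing the spread shrinks the denominator, so both effects listed on the slide push FD up.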
8. Static combination
- Let E = {E_1, E_2, ..., E_j, ..., E_N} be a set of N experts
- Let X = {x_i} be the set of patterns
- Let f_j(.) be the function associated to expert E_j that produces a score s_{ij} = f_j(x_i) for each pattern x_i

Static linear combination:

  s_i^* = \sum_{j=1}^{N} \alpha_j \, s_{ij}

- The weights are computed so as to maximize some measure of class separability on a training set
- The combination is static with respect to the test pattern to be classified
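The static combination is a single matrix-vector product: one weight per expert, shared by every pattern. A minimal sketch (the weight values are arbitrary placeholders, not trained ones):

```python
# Static linear combination: s_i* = sum_j alpha_j * s_ij, with one fixed
# weight vector alpha applied to every pattern's score vector.
import numpy as np

def static_combination(S, alpha):
    """S: (n_patterns, N) score matrix; alpha: (N,) fixed weights."""
    return np.asarray(S, dtype=float) @ np.asarray(alpha, dtype=float)

S = np.array([[0.9, 0.7, 0.8],   # scores of pattern 1 from 3 experts
              [0.2, 0.1, 0.3]])  # scores of pattern 2 from 3 experts
alpha = np.array([0.5, 0.2, 0.3])  # placeholder weights
print(static_combination(S, alpha))  # one combined score per pattern
```

In the paper the weights would be tuned on a training set to maximize a separability measure such as FD; here they are fixed only to show the mechanics.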
9. Dynamic combination
The weights of the combination also depend on the test pattern to be classified:

  s_i^* = \sum_{j=1}^{N} \alpha_{ij} \, s_{ij}

The local estimation of the combination parameters may yield better results than the global estimation, in terms of separation between the distributions of the scores s_i^*.
10. Estimation of the parameters
for the dynamic combination
- Let us suppose, without loss of generality, that s_{i1} \le s_{i2} \le \dots \le s_{iN}
- The linear combination of three experts

  \alpha_{i1} s_{i1} + \alpha_{i2} s_{i2} + \alpha_{i3} s_{i3}, \quad \alpha_{ij} \in [0, 1]

  can also be written as \alpha'_{i1} s_{i1} + s_{i2} + \alpha'_{i3} s_{i3},
  which is equivalent to \alpha''_{i1} s_{i1} + \alpha''_{i3} s_{i3}
  (since s_{i2} lies between s_{i1} and s_{i3}, it can be absorbed into the two extreme terms)
11. Estimation of the parameters
for the dynamic combination
- This reasoning can be extended to N experts, so we get

  s_i^* = \beta_{i1} \min_j(s_{ij}) + \beta_{i2} \max_j(s_{ij})

- Thus, for each pattern we have to estimate two parameters
- If we set the constraint \beta_{i1} + \beta_{i2} = 1, only one parameter has to be estimated, and s_i^* \in [\min_j(s_{ij}), \max_j(s_{ij})]
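Under the constraint above, each combined score is a convex combination of the pattern's own minimum and maximum scores, governed by a single per-pattern weight. A minimal numpy sketch (score values are made up):

```python
# Dynamic Score Combination in the reduced one-parameter form:
# s_i* = beta_i * max_j(s_ij) + (1 - beta_i) * min_j(s_ij).
import numpy as np

def dsc(S, beta):
    """S: (n, N) score matrix; beta: (n,) per-pattern weights in [0, 1]."""
    S = np.asarray(S, dtype=float)
    beta = np.asarray(beta, dtype=float)
    return beta * S.max(axis=1) + (1.0 - beta) * S.min(axis=1)

S = np.array([[0.2, 0.5, 0.9],
              [0.1, 0.4, 0.6]])
# beta = 1 recovers the Max rule, beta = 0 the Min rule (see next slide).
print(dsc(S, np.array([1.0, 0.0])))  # [0.9, 0.1]
```

Whatever beta_i is chosen, the combined score stays inside [min_j(s_ij), max_j(s_ij)], which is the guarantee stated on the slide.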
12. Properties of the Dynamic
Score Combination
s_i^* = \beta_i \max_j(s_{ij}) + (1 - \beta_i) \min_j(s_{ij})

- This formulation embeds the typical static combination rules:

  Linear combination: \beta_i = \frac{\sum_{j=1}^{N} \alpha_j s_{ij} - \min_j(s_{ij})}{\max_j(s_{ij}) - \min_j(s_{ij})}

  Mean rule: \beta_i = \frac{\frac{1}{N} \sum_{j=1}^{N} s_{ij} - \min_j(s_{ij})}{\max_j(s_{ij}) - \min_j(s_{ij})}

- Max rule for \beta_i = 1 and Min rule for \beta_i = 0
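The Mean-rule identity can be verified on toy numbers: substituting that beta_i into the min/max form gives back the plain average of the scores, since min + beta*(max - min) telescopes to the mean. A quick numpy check (score matrix is made up):

```python
# Verify that the Mean-rule beta_i reproduces the mean of the scores:
# beta_i = (mean_j s_ij - min_j s_ij) / (max_j s_ij - min_j s_ij).
import numpy as np

S = np.array([[0.2, 0.5, 0.9],
              [0.1, 0.4, 0.6]])
mn, mx = S.min(axis=1), S.max(axis=1)
beta = (S.mean(axis=1) - mn) / (mx - mn)

combined = beta * mx + (1 - beta) * mn   # the DSC min/max form
print(np.allclose(combined, S.mean(axis=1)))  # True
```

The same substitution works for any static linear combination whose output lies within [min, max], which is why the slide says the DSC formulation "embeds" those rules.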
13. Properties of the Dynamic
Score Combination
s_i^* = \beta_i \max_j(s_{ij}) + (1 - \beta_i) \min_j(s_{ij})

- This formulation also embeds the Dynamic Score Selection (DSS):

  \beta_i = 1 if x_i belongs to the positive class
  \beta_i = 0 if x_i belongs to the negative class

- DSS clearly maximizes class separability if the estimation of the class of x_i is reliable
  - e.g., by a classifier trained on the outputs of the experts E
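With oracle labels, DSS simply pushes every genuine pattern to its highest score and every impostor pattern to its lowest. A sketch of this ideal case (data is made up; in practice the labels would come from a classifier, as on the next slide):

```python
# Dynamic Score Selection with oracle class labels:
# beta_i = 1 for positive patterns (take max), 0 for negative (take min).
import numpy as np

def dss(S, labels):
    """S: (n, N) score matrix; labels: (n,) with 1 = positive, 0 = negative."""
    S = np.asarray(S, dtype=float)
    beta = np.asarray(labels, dtype=float)
    return beta * S.max(axis=1) + (1 - beta) * S.min(axis=1)

S = np.array([[0.2, 0.5, 0.9],   # a genuine pattern
              [0.1, 0.4, 0.6]])  # an impostor pattern
print(dss(S, [1, 0]))  # [0.9, 0.1]: genuine pushed up, impostor pushed down
```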
14. Supervised estimation of \beta_i

s_i^* = \beta_i \max_j(s_{ij}) + (1 - \beta_i) \min_j(s_{ij})

- \beta_i = P(pos | x_i, E), where P(pos | x_i, E) can be estimated by a classifier trained on the outputs of the experts E
- \beta_i is estimated by a supervised procedure
- This formulation can also be seen as a soft version of DSS
  - P(pos | x_i, E) accounts for the uncertainty in the class estimation
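A sketch of the supervised DSC on toy data, using a hand-rolled k-NN posterior estimate for P(pos | x_i, E) (the paper uses k-NN, LDC, QDC, and SVM classifiers; the helper below and all data are illustrative only):

```python
# Supervised DSC: beta_i = P(pos | x_i, E), estimated here as the fraction
# of the k nearest training score vectors that belong to the positive class.
import numpy as np

def knn_posterior(S_train, y_train, S_test, k=3):
    """Crude k-NN estimate of P(pos | score vector)."""
    d = np.linalg.norm(S_test[:, None, :] - S_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]   # indices of k nearest neighbours
    return y_train[nearest].mean(axis=1)

S_train = np.array([[0.9, 0.8, 0.7], [0.8, 0.9, 0.8], [0.7, 0.9, 0.9],
                    [0.1, 0.2, 0.3], [0.2, 0.1, 0.2], [0.3, 0.2, 0.1]], float)
y_train = np.array([1, 1, 1, 0, 0, 0], float)  # 1 = genuine, 0 = impostor

S_test = np.array([[0.85, 0.8, 0.75],   # looks genuine
                   [0.15, 0.2, 0.25]])  # looks like an impostor
beta = knn_posterior(S_train, y_train, S_test)  # beta_i = P(pos | x_i)
s_star = beta * S_test.max(axis=1) + (1 - beta) * S_test.min(axis=1)
print(s_star)  # pulled toward max for likely genuine, toward min otherwise
```

When the posterior is 0 or 1 this reduces exactly to DSS; intermediate probabilities soften the selection, which is the "soft version of DSS" reading on the slide.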
15. Unsupervised estimation of \beta_i

s_i^* = \beta_i \max_j(s_{ij}) + (1 - \beta_i) \min_j(s_{ij})

- \beta_i is estimated by an unsupervised procedure
  - the estimation does not depend on a training set

  Mean rule: \beta_i = \frac{1}{N} \sum_{j=1}^{N} s_{ij}
  Max rule: \beta_i = \max_j(s_{ij})
  Min rule: \beta_i = \min_j(s_{ij})
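The unsupervised variants use the pattern's own scores as the weight, with no training set. A sketch of the DSC Mean rule, assuming scores are normalised to [0, 1] so that beta_i is a valid weight (the toy matrix is made up):

```python
# Unsupervised DSC, Mean rule: beta_i = (1/N) * sum_j s_ij, then
# s_i* = beta_i * max_j(s_ij) + (1 - beta_i) * min_j(s_ij).
import numpy as np

def dsc_mean(S):
    S = np.asarray(S, dtype=float)        # scores assumed in [0, 1]
    beta = S.mean(axis=1)                 # per-pattern weight
    return beta * S.max(axis=1) + (1 - beta) * S.min(axis=1)

S = np.array([[0.8, 0.9, 0.7],   # high scores -> beta high -> near the max
              [0.1, 0.2, 0.3]])  # low scores  -> beta low  -> near the min
print(dsc_mean(S))  # [0.86, 0.14]
```

The effect is a mild self-reinforcement: patterns the experts agree are genuine get pushed toward their maximum score, and likely impostors toward their minimum, widening the gap between the two distributions.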
16. Dataset
- The dataset used is the Biometric Scores Set Release 1 of NIST
  http://www.itl.nist.gov/iad/894.03/biometricscores/
- This dataset contains scores from 4 experts related to face and fingerprint recognition systems
- The experiments were performed using all the possible combinations of 3 and 4 experts
- The dataset has been divided into four parts, each one used in turn for training and the remaining three for testing
17. Experimental Setup
- Experiments aimed at assessing the performance of
  - the unsupervised Dynamic Score Combination (DSC)
    - \beta_i estimated by the Mean, Max, and Min rules
  - the supervised Dynamic Score Combination
    - \beta_i estimated by k-NN, LDC, QDC, and SVM classifiers
- Comparisons with
  - the Ideal Score Selector (ISS)
  - the Optimal static Linear Combination (Opt LC)
  - the Mean, Max, and Min rules
  - the linear combination where coefficients are estimated by LDA
18. Performance assessment
- Area Under the ROC Curve (AUC)
- Equal Error Rate (EER)
- d' = \frac{\mu_{gen} - \mu_{imp}}{\sqrt{\frac{\sigma_{gen}^2}{2} + \frac{\sigma_{imp}^2}{2}}}
- FNMR at 1% and 0% FMR
- FMR at 1% and 0% FNMR
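The d' index above is easy to compute directly from the two score populations. A small sketch on made-up data (the helper name is mine):

```python
# d' separability index: d' = (mu_gen - mu_imp) / sqrt(var_gen/2 + var_imp/2).
import numpy as np

def d_prime(genuine, impostor):
    g = np.asarray(genuine, dtype=float)
    i = np.asarray(impostor, dtype=float)
    return (g.mean() - i.mean()) / np.sqrt((g.var() + i.var()) / 2.0)

print(d_prime([0.8, 1.0], [0.0, 0.2]))  # 8.0 on this toy data
```

Like FD, d' grows when the means move apart or the spreads shrink, so larger values of d' in the result tables indicate better-separated score distributions.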
20. DSC Mean vs. Mean rule
Combination of three experts

        DSC Mean   Mean rule
AUC     0.9991     0.9986
EER     0.0052     0.0129
d'      4.4199     4.0732
21. Unsupervised DSC Vs. fixed rules
[Plot: AUC of the unsupervised DSC vs. the fixed rules]
22. Unsupervised DSC Vs. fixed rules
[Plot: EER of the unsupervised DSC vs. the fixed rules]
23. Unsupervised DSC Vs. fixed rules
[Plot: FMR at 0% FNMR of the unsupervised DSC vs. the fixed rules]
24. DSC Mean Vs. supervised DSC
[Plot: AUC of DSC Mean vs. the supervised DSC]
25. DSC Mean Vs. supervised DSC
[Plot: EER of DSC Mean vs. the supervised DSC]
26. DSC Mean Vs. supervised DSC
[Plot: FMR at 0% FNMR of DSC Mean vs. the supervised DSC]
27. Conclusions
- The Dynamic Score Combination mechanism embeds different combination modalities
- Experiments show that the unsupervised DSC usually outperforms the related "fixed" combination rules
- The use of a classifier in the supervised DSC allows attaining better performance, at the expense of increased computational complexity
- Depending on the classifier, performance is very close to that of the optimal linear combiner