Journal of Multivariate Analysis 96 (2005) 374–383
www.elsevier.com/locate/jmva




     A measure of independence for a multivariate
     normal distribution and some connections with
                     factor analysis
                                            Martin Knott∗
          Statistics Department, London School of Economics and Political Science, Houghton Street,
                                          London WC2A 2AE, UK
                                          Received 28 January 2004
                                      Available online 7 December 2004



Abstract
   This paper gives results for the population value of a measure of the goodness-of-fit of a general
multivariate normal distribution to the simpler hypothesis of independent normal variables. The mea-
sure was introduced by Rudas, Clogg and Lindsay in 1994, who gave the value for the bivariate normal
distribution. Connections with factor analysis are briefly discussed.
© 2004 Elsevier Inc. All rights reserved.

AMS 1991 subject classification: 62H20; 62H25; 46N10

Keywords: Goodness-of-fit; Convex optimisation; Minimum trace factor analysis




1. Introduction

   Rudas, Clogg and Lindsay [9] introduced a new index of fit for contingency table models.
Their idea was to measure the population goodness-of-fit of a model by writing the popu-
lation under investigation as a mixture of the population under the model and an arbitrary
population. The index of fit that they proposed is the largest mixing probability that can
be given to the model so that the mixture representation is feasible. Their measure is the
maximum proportion of the population under investigation that can be thought of as coming

  ∗ Fax: +44 0171 955 7416.
   E-mail address: m.knott@lse.ac.uk.

0047-259X/$ - see front matter © 2004 Elsevier Inc. All rights reserved.
doi:10.1016/j.jmva.2004.10.013


from the population under the assumptions of the model. It lies between 0 and 1, and is
close to 1 for a model that fits well. To use the index in practice one must estimate it from
a sample, but this paper concentrates on the population quantities.
   This attractive index of fit (which is applicable much more widely than to models for
contingency tables) has been further investigated in [3,8].
   In order to check if the measure is as attractive as it seems, it is useful to have its value
worked out in a few simple cases. Rudas, Clogg and Lindsay gave the value of the index
of fit, say ζ, for a general bivariate normal distribution with correlation coefficient ρ relative
to the simpler model of independent normal distributions:

    ζ = ((1 − |ρ|)/(1 + |ρ|))^{1/2}.                                                  (1)
This is a very appealing measure of independence for a bivariate normal distribution, but it
would be good to see the index of fit to independence for a general q-dimensional multi-
variate normal distribution. In Section 2 the problem of finding the index of fit for a general
multivariate normal distribution is reduced to one of maximising a concave function over
a convex set. A dual problem is given and also extremality relations that characterise the
solution. Section 3 applies the approach to obtain explicit solutions for equicorrelated nor-
mal variables, and gives an example of the use of a simple algorithm to obtain the index
of fit for a general multivariate normal distribution. Section 4 contains a discussion and
interpretation of the results in the context of factor analysis, in particular the method of
minimum trace estimation of the linear factor model.
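   As a quick numerical illustration of the bivariate index (1) before proceeding, here is a
one-line R (S-compatible) sketch; the function name is ours, not from [9]:

zeta_bivariate <- function(rho) sqrt((1 - abs(rho))/(1 + abs(rho)))
zeta_bivariate(0.5)   # 0.5774: at correlation 0.5, about 58% of the
                      # population can be regarded as independent normal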


2. Optimisation results

   The multivariate normal distribution will be assumed to be non-degenerate. If it has a
singular covariance matrix, then one could reduce the dimension by a rotation and continue.
The arguments of [9], see Eq. (12) in their Appendix A, show that ζ is the largest number such
that for some choice of the univariate normal density functions gXi(xi) for i = 1, . . . , q,
and for all x,

    fX(x) ≥ ζ ∏_{i=1}^{q} gXi(xi),                                                    (2)

where fX (x) is the density function of the multivariate normal distribution of interest for the
random variables X = (X1 , . . . , Xq ) taking values x = (x1 , . . . , xq ). It has been assumed
implicitly that for the right-hand side of (2) no degenerate distributions are allowed for X,
which is necessary to include the joint distribution of X under independence in the class of
distributions allowed for the left-hand side of (2).
   One may therefore, without loss of generality, by making changes in location and scale
on both sides of (2), take the random variables X on the left-hand side of (2) to have mean
0 and variance 1. Their correlation matrix will be written as C. It is not clear at this stage
what choice of the means, say μ = (μ1, . . . , μq), and the diagonal matrix of variances, say
Θ = diag(θ11, . . . , θqq), of the normal distributions on the right-hand side of (2) will allow
ζ to be attained.
   Inequality (2) may be written

    −(1/2) ln det C − (1/2) x′C⁻¹x ≥ ln ζ − (1/2) ln det Θ − (1/2)(x − μ)′Θ⁻¹(x − μ),  (3)

which simplifies to

    (1/2) ln(det Θ / det C) − ln ζ ≥ (1/2) x′(C⁻¹ − Θ⁻¹)x + μ′Θ⁻¹x − (1/2) μ′Θ⁻¹μ.    (4)

From (4), following the same argument as in [9], we obtain a non-zero ζ only if C⁻¹ − Θ⁻¹
is negative semidefinite.
   Let us now decide what value for μ gives the largest ζ. Essentially, as a referee has pointed
out, one shows that the min over μ of the max over x of the right-hand side of (4) is zero.
   If C⁻¹ − Θ⁻¹ is singular, then for every x in the kernel space of C⁻¹ − Θ⁻¹ it must be
required that μ′Θ⁻¹x = 0, otherwise for some sufficiently long x, ζ would be forced to be
indefinitely small. So we can assume that Θ⁻¹μ is in the image space of C⁻¹ − Θ⁻¹, and
define a as an inverse image of −Θ⁻¹μ, so that

    (C⁻¹ − Θ⁻¹)a = −Θ⁻¹μ.                                                             (5)

Then (4) becomes

    (1/2) ln(det Θ / det C) − ln ζ
        ≥ (1/2)(x − a)′(C⁻¹ − Θ⁻¹)(x − a) − (1/2) a′(C⁻¹ − Θ⁻¹)a − (1/2) μ′Θ⁻¹μ.      (6)

The right-hand side of (6) can be written

    (1/2)(x − a)′(C⁻¹ − Θ⁻¹)(x − a) − (1/2) a′(C⁻¹ − Θ⁻¹)a − (1/2) a′(C⁻¹ − Θ⁻¹)Θ(C⁻¹ − Θ⁻¹)a,

which simplifies to

    (1/2)(x − a)′(C⁻¹ − Θ⁻¹)(x − a) + (1/2) a′(C⁻¹ − C⁻¹ΘC⁻¹)a.

Since a′(C⁻¹ − C⁻¹ΘC⁻¹)a is non-negative for all a, the choice a = 0 allows the largest
ζ. Putting a = 0 implies choosing μ = 0, as can be seen from (5). Note that one is free to
choose a = 0 without consideration of (x − a)′(C⁻¹ − Θ⁻¹)(x − a) because inequality (6)
must be true for all choices of x, which is the same as for all choices of x − a.
   Returning to (4), but taking μ = 0,

    (1/2) ln(det Θ / det C) − ln ζ ≥ (1/2) x′(C⁻¹ − Θ⁻¹)x.                             (7)


Since C⁻¹ − Θ⁻¹ is negative semidefinite, the inequality is at its most stringent when x = 0.
So the index ζ satisfies

    (1/2) ln(det Θ / det C) − ln ζ ≥ 0                                                 (8)

for all Θ for which C⁻¹ − Θ⁻¹ is negative semidefinite. That is, ζ is the largest value of

    (det Θ / det C)^{1/2}                                                              (9)

for which Θ = diag(θ11, . . . , θqq) satisfies the condition that C⁻¹ − Θ⁻¹ is negative
semidefinite.
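   Condition (9) is easy to check numerically. The following R sketch (the function name is
ours) tests Θ ⪯ C — equivalent, for positive definite matrices, to C⁻¹ − Θ⁻¹ being negative
semidefinite — and evaluates (det Θ / det C)^{1/2} for a candidate diagonal Θ supplied as a
vector theta:

zeta_candidate <- function(theta, C, tol = 1e-8) {
  # feasible only when C - Theta is positive semidefinite
  if (min(eigen(C - diag(theta))$values) < -tol) return(NA)
  sqrt(prod(theta)/det(C))   # (det Theta / det C)^(1/2), as in (9)
}

Any feasible theta therefore gives a lower bound for ζ; the index itself is the supremum over
such candidates.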
   To find ζ one needs to maximise det Θ subject to C⁻¹ ⪯ Θ⁻¹ (using standard conventions
for writing the Loewner partial ordering of matrices), see [5, p. 140]. The final stage of
the proof will proceed by reformulating the last statement of the problem, and then using a
procedure based on convex optimisation methods.
   It is true that C⁻¹ ⪯ Θ⁻¹ if and only if Θ ⪯ C [5, Exercise 13, p. 168]. So the problem
can be reformulated as that of maximising ln det Θ subject to Θ ⪯ C and Θ diagonal. More
precisely, we want to find

    sup ln det[Θ]                                                                     (10)

over all diagonal positive semidefinite matrices Θ satisfying

    Θ ⪯ C.                                                                            (11)

This form of the problem is seen as the maximisation of a concave function of the matrix
Θ over a convex set of such matrices. It is easy to see that ln det Θ is concave and strictly
increasing on the set of positive semidefinite diagonal matrices Θ. It is obvious that the
constraint Θ ⪯ C restricts Θ to a convex subset of all matrices Θ.
   So, all the well-known results from convex optimisation can be applied. When considering
a dual problem, one needs to use the norm [trace(A′A)]^{1/2} for the matrix A, and the
corresponding inner product. Results from, for instance, [4, pp. 62–68], or from [13, Chapter
4.1], can be used to show that the problem in (10) and (11) has a solution, and that the
solution is the same as that for the dual problem of finding

    inf[−ln det diag D + trace CD − q]                                                (12)

over all positive semidefinite matrices D.
   The dual problem is easier because there is no restriction on D except for positive semidef-
initeness. Rather than justify in detail the application of the general convex optimisation
theory, it seems better to prove Theorem 1 directly. The notation diag D means the diagonal
matrix with diagonal elements equal to those of D. To avoid trivialities any matrix inverse
will be interpreted as a generalised inverse where necessary.


Theorem 1. If the positive definite matrix Θ̂ and the positive semidefinite matrix D̂ are
such that

    Θ̂ = [diag D̂]⁻¹,

    Θ̂ ⪯ C

and satisfy the extremality relation

    trace[(Θ̂ − C)D̂] = 0,

then Θ̂ gives a solution to (10), (11) and D̂ gives a solution to (12), and the solutions are
equal.
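   Before giving the proof, note that the three conditions are directly checkable numerically.
A minimal R sketch (the function name is ours), taking a proposed dual matrix D with
strictly positive diagonal and reporting whether the conditions hold, together with the
common optimal value ln det Θ̂:

check_theorem1 <- function(D, C, tol = 1e-8) {
  Theta <- diag(1/diag(D))    # candidate Theta-hat = [diag D]^(-1)
  list(feasible = min(eigen(C - Theta)$values) >= -tol,     # Theta-hat <= C
       extremal = abs(sum(diag((Theta - C) %*% D))) <= tol, # trace[(Theta-hat - C)D] = 0
       logdet = sum(log(diag(Theta))))
}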

   The proof of Theorem 1 is as follows: From the extremality relation,

    ln det Θ̂ = −trace[(Θ̂ − C)D̂] + ln det Θ̂
             ≥ −trace[(Θ − C)D̂] + ln det Θ

for every diagonal positive definite matrix Θ. The last inequality comes from considering
maximum likelihood estimation of the variances Θ⁻¹ for independent normally distributed
random variables with means zero and sample covariance matrix D̂. Since −trace[(Θ − C)D̂] ≥ 0
whenever Θ ⪯ C, it follows that for all positive definite diagonal Θ with Θ ⪯ C,

    ln det Θ̂ ≥ ln det Θ.

This is enough to prove that Θ̂ gives a solution to (10), (11).
   On the other hand, for the dual problem,

    −ln det diag D̂ + trace CD̂ − q = −ln det diag D̂ − trace[((diag D̂)⁻¹ − C)D̂]
                                   = −ln det diag D̂
                                   ≤ −ln det diag D − trace[((diag D)⁻¹ − C)D]
                                   = −ln det diag D + trace CD − q,

where D is any positive semidefinite matrix for which diag D is invertible. The last inequality
is again using maximum likelihood estimation of the variances of independent normal
random variables with mean zero and covariance matrix D̂, and shows that D̂ gives a
solution to (12). It is also clear that the solutions for the primal and dual problems are equal.
   One other immediate consequence of Theorem 1 is that for every diagonal positive
semidefinite Θ satisfying Θ ⪯ C, and for every positive semidefinite D, it is true that

    ln det Θ ≤ −ln det diag D + trace CD − q,                                         (13)

so it is easy to generate upper bounds for the measure of independence. The left- and
right-hand sides of this last inequality are also seen to be equal when Θ = Θ̂ and D = D̂.
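   A hedged R sketch of the upper bound (13) (function name ours): any positive semidefinite
D certifies a bound on ln det Θ, and hence, after subtracting ln det C and halving, a bound
on ln ζ.

dual_bound <- function(D, C) {
  q <- nrow(C)
  -sum(log(diag(D))) + sum(diag(C %*% D)) - q   # right-hand side of (13)
}
# upper bound on ln(zeta): (dual_bound(D, C) - sum(log(eigen(C)$values)))/2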


3. Applications

   As a first application the measure of independence for equally correlated multivariate
normal variables will be displayed. The measure would be expected to change its form
according to whether the common correlation coefficient ρ is positive or negative, since the
range of ρ for q equally correlated normal variables is between −1/(q − 1) and 1.
   Supposing that −1/(q − 1) < ρ ≤ 0, it is easily verified that taking

    D̂ = (1/(1 + (q − 1)ρ)) 11′,

where 1 is a q × 1 vector with all elements equal to 1, and so

    Θ̂ = [diag D̂]⁻¹ = (1 + (q − 1)ρ) Iq,

gives a solution. The difference C − Θ̂ is equal to −qρ(I − 11′/q), which is positive semidef-
inite (and of rank q − 1). The measure of independence is, from (9),

    ζ = ((1 + (q − 1)ρ)/(1 − ρ))^{(q−1)/2}.                                           (14)
               1−
If 0 ≤ ρ < 1, then taking

    D̂ = (q/((q − 1)(1 − ρ))) [I − 11′/q],

and so Θ̂ = (1 − ρ)Iq, gives a solution. The difference C − Θ̂ is equal to ρ11′, which is
positive semidefinite (and of rank 1). The measure of independence is, from (9),

    ζ = ((1 − ρ)/(1 + (q − 1)ρ))^{1/2}.                                               (15)
Results (14), (15) generalise the case q = 2 in [9]. Writing ζ(ρ) for ζ evaluated at correlation
coefficient ρ, there is a curious reciprocal property: for ρ ≥ 0 the ratio appearing in (15) is
the reciprocal of the ratio appearing in (14) evaluated at the same ρ.
   The results for equally correlated normal random variables show that as q → ∞, the
measure of independence tends to 0 for fixed ρ. It is hard to say whether that is a surprising
property or not.
   Note that the ranks of C − Θ̂ and of D̂ for ρ ≤ 0 and for ρ ≥ 0 show examples of the
extremes of their possible values. It would not be enough simply to look always for rank 1
or rank q − 1 matrices.
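   The closed forms (14) and (15), as reconstructed here, can be checked against the
conditions of Theorem 1; a small R sketch (names ours), reusing check_theorem1 from
Section 2:

zeta_equicorr <- function(rho, q) {
  if (rho <= 0) ((1 + (q - 1)*rho)/(1 - rho))^((q - 1)/2)
  else sqrt((1 - rho)/(1 + (q - 1)*rho))
}
# example with q = 3, rho = 0.5: zeta_equicorr(0.5, 3) gives 0.5
C <- matrix(0.5, 3, 3); diag(C) <- 1
D <- (3/((3 - 1)*(1 - 0.5)))*(diag(3) - matrix(1/3, 3, 3))
# check_theorem1(D, C) reports feasible and extremal both TRUE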
  A second application will show how easy the index is to calculate for a general correlation
matrix. The correlation matrix that appears in Table 1 is from a study by Smith and Stanley
[11] on the relation between reaction times and intelligence test scores. Factor analysis
results for these data appear in the source paper, and in [1, pp. 69–72].

Table 1
Correlation coefficients for Smith and Stanley’s data
1                   0.466                  0.552                0.340                 0.576   0.510
0.466               1                      0.572                0.193                 0.263   0.239
0.552               0.572                  1                    0.445                 0.354   0.356
0.340               0.193                  0.445                1                     0.184   0.219
0.576               0.263                  0.354                0.184                 1       0.794
0.510               0.239                  0.356                0.219                 0.794   1



   The dual problem is very well behaved, so it is easy to write a simple algorithm in
SPLUS to solve it. The details are given at the end, in Section 5. The measure of inde-
pendence is given by this algorithm as ζ = 0.08970137, so about 9% of the observations
could be considered to be from independent normal distributions. The eigenvalues in test
are 0.0001800282, −0.0007531514, −0.0683073036, −0.3883489215, −0.8617405225,
−2.7810610728, showing that to the accuracy expected Θ̂ = [diag D̂]⁻¹ ⪯ C, but that with
two eigenvalues close to 0, the rank of C − Θ̂ is effectively 4. This ties in with the presence
of a Heywood case that is discovered by maximum likelihood fitting of a normal factor
model to this correlation matrix; see the discussion in [1].


4. Factor analysis

   There are similarities between the problem in (10), (11) and the minimum trace method
of fitting factor analysis models introduced to psychometrics by [2]. Given an observed
covariance matrix C, the minimum trace method finds a diagonal positive semidefinite
matrix Θ such that Θ ⪯ C and trace[C − Θ] is minimised. The method thus seeks to maximise
trace Θ while keeping Θ ⪯ C. It is parallel to the problem in (10) and (11), but maximises
the trace rather than the determinant. The maximisation of the trace was considered using
optimisation theory by [7,10], and in a manner closer to that used in this paper by [12].
Much of the interest has been in the algorithms to find the solutions, see [6].
   The matrices Θ̂, D̂ giving the solution for the minimum trace problem are not in general
the same as for the problem considered in this paper. One could propose a ‘maximum
determinant’ method for fitting factor analysis models, and use the methods developed in
this paper to carry out the fitting, but there are perhaps too many ways to fit factor analysis
models already. As it happens, the matrices Θ̂, D̂ leading to the solutions for the maximum
determinant problem for equally correlated normal variables given in Section 3 also provide
a solution for the minimum trace problem.
   It is possible to construct a family of criteria for fitting factor analysis models that includes
the maximum determinant method and the minimum trace method as special cases. All that
is necessary is to maximise over Θ = diag(θ11, θ22, . . . , θqq)

    (1/ν) ln( Σi θii^ν / q )

for some ν, 0 < ν ≤ 1, where as before Θ ⪯ C. The case ν = 1 gives the minimum trace
method, while letting ν → 0 gives the maximum determinant method. It is even possible
to generalise the weighted minimum trace method introduced by Shapiro [10] by using the
family of criteria

    (1/ν) ln Σi pi θii^ν,                                                             (16)

where the pi are non-zero probabilities that sum to 1. Results for this extension corresponding
to Theorem 1 are given in Theorems 2 and 3, which are offered without proof, but which
can be proved in a manner not too dissimilar to Theorem 1.
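   To make the interpolation concrete, a brief R sketch of criterion (16) (function name ours;
ν written as nu):

crit16 <- function(theta, p, nu) log(sum(p*theta^nu))/nu
# nu = 1: monotone in the weighted trace sum(p*theta), the
#         (weighted) minimum trace criterion
# nu -> 0: the limit is sum(p*log(theta)), which for p = rep(1/q, q)
#          is ln det(Theta)/q, the maximum determinant criterion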

Theorem 2 (Primal generalised weighted minimum trace). For 0 < ν < 1, β = ν(ν − 1)⁻¹
and W = diag(√p1, . . . , √pq), where the pi are non-zero probabilities adding to 1, (18),
(19), (20) are a set of sufficient conditions for a diagonal positive semidefinite matrix Θ̂ =
diag(θ̂11, θ̂22, . . . , θ̂qq) to give a solution for

    sup (1/ν) ln[trace WΘ^ν W]                                                        (17)

over all positive semidefinite diagonal Θ ⪯ C:

    Θ̂ ⪯ C;                                                                           (18)

for s = 1, . . . , q,

    θ̂ss = (D̂ss)^{β−1} / trace[W (diag D̂)^β W];                                      (19)

    trace[W(Θ̂ − C)WD̂] = 0,                                                          (20)

where D̂ is a positive semidefinite matrix.

Theorem 3 (Dual generalised weighted minimum trace). For 0 < ν < 1, β = ν/(ν − 1)
and W = diag(√p1, . . . , √pq), where the pi are non-zero probabilities adding to 1, (18),
(19), (20) are a set of sufficient conditions for a positive semidefinite matrix D̂ with at least
one non-zero diagonal element to give a solution for

    inf[ trace[WCWD] − 1 − (1/β) ln[trace[W (diag D)^β W]] ]                          (21)

over all positive semidefinite D. Condition (19) can also be written:

    D̂ = (diag Θ̂)^{ν−1} / trace[W Θ̂^ν W].                                           (22)


5. S code

 The following S programme worked well in SPLUS 6.1 on a Gateway Pentium II running
Windows 2000.

corr<-c(1,0.466,0.552,0.340,0.576,0.510,
  0.466,1,0.572,0.193,0.263,0.239,
  0.552,0.572,1,0.445,0.354,0.356,
  0.340,0.193,0.445,1,0.184,0.219,
  0.576,0.263,0.354,0.184,1,0.794,
  0.510,0.239,0.356,0.219,0.794,1);

q<-6; dim(corr)<-c(q,q);

e<-c(rnorm(q*q)); dim(e)<-c(q,q); theta<-0.01;

for (ii in 1:800) {
  # objective of the dual problem (12) at D = EE'
  obj <- -sum(log(diag(e%*%t(e)))) + sum(diag(corr%*%e%*%t(e))) - q;
  # (diag EE')^{-1} - C: diff %*% e is proportional to minus the
  # gradient of the objective with respect to E
  diff <- solve(diag(diag(e%*%t(e)))) - corr;
  e1 <- e + theta*diff%*%e;
  objnew <- -sum(log(diag(e1%*%t(e1)))) + sum(diag(corr%*%e1%*%t(e1))) - q;
  # accept the step and lengthen it if the objective decreased,
  # otherwise shorten the step
  if (obj > objnew) { e <- e1; theta <- 2*theta }
  else { theta <- theta/2 }
};
# eigenvalues of (diag D)^{-1} - C: all should be <= 0 at a global optimum
test <- eigen(solve(diag(diag(e%*%t(e))))-corr)$values;
# the measure of independence, from (9)
zeta <- exp(-sum(log(diag(e%*%t(e))))/2 - sum(log(eigen(corr)$values))/2)

The algorithm seeks to decrease the value of the objective function for the dual problem
(12). The correlation matrix is written as corr. The algorithm uses a matrix E such that
D = EE′ to make sure that D is positive semidefinite, and uses the quantity
[(diag EE′)⁻¹ − C]E, which is proportional to the negative of the derivative of the objective
function with respect to the elements of E, to decide on the right direction in which to move
E. Though it is possible that this algorithm will stick at a local optimum, one has an
automatic check on global optimality through testing Θ̂ = [diag D̂]⁻¹ ⪯ C, and one can
move in a random direction to break the stalemate.
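A minimal sketch of that restart idea, assuming e, corr and q as left by the programme
above (the tolerance is ours):

if (max(eigen(solve(diag(diag(e%*%t(e)))) - corr)$values) > 1e-6) {
  e <- e + 0.1*matrix(rnorm(q*q), q, q)   # random move to break the stalemate
  # ...then rerun the iteration loop above
}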

References

 [1] D.J. Bartholomew, M. Knott, Latent Variable Models and Factor Analysis, second ed., Kendall Library of
     Statistics, vol. 7, Arnold, London, 1999.
 [2] P.M. Bentler, A lower bound method for the dimension-free measurement of internal consistency, Soc. Sci.
     Res. 1 (1972) 343–357.
 [3] C.C. Clogg, T. Rudas, L. Xi, A new index of structure for the analysis of models for mobility tables and other
     cross-classifications, Sociol. Methodol. 25 (1995) 197–222.
 [4] I. Ekeland, R. Temam, Convex Analysis and Variational Problems, North-Holland Publishing Company,
     Amsterdam, 1976.
 [5] P.R. Halmos, Finite-dimensional Vector Spaces, second ed., D. Van Nostrand Company Inc, Princeton, 1958.
 [6] M. Jamshidian, P.M. Bentler, A quasi-Newton method for minimum trace factor analysis, J. Stat. Comput.
     Simul. 62 (1998) 73–89.
 [7] G. Della Riccia, A. Shapiro, Minimum rank and minimum trace of covariance matrices, Psychometrika 47
     (1982) 443–448.
 [8] T. Rudas, The mixture index of fit and minimax regression, Metrika 50 (1999) 163–172.
 [9] T. Rudas, C.C. Clogg, B.G. Lindsay, A new index of fit based on mixture models for the analysis of
     contingency tables, J. Roy. Statist. Soc. Ser. B 56 (1994) 623–639.
[10] A. Shapiro, Rank reducibility of a symmetric matrix and sampling theory of minimum trace factor analysis,
     Psychometrika 47 (1982) 187–199.
[11] G.A. Smith, G. Stanley, Clocking g: relating intelligence and measures of timed performance, Intelligence 7
     (1983) 353–368.
[12] J.M.F. ten Berge, T.A.B. Snijders, F.E. Zegers, Computational aspects of the greatest lower bound to the
     reliability and constrained minimum trace factor analysis, Psychometrika 46 (1981) 201–213.
[13] H. Wolkowicz, R. Saigal, L. Vandenberghe (Eds.), Handbook of Semidefinite Programming: Theory,
     Algorithms and Applications, Kluwer Academic Publishers, Boston, 2000.

Weitere ähnliche Inhalte

Was ist angesagt?

Bayseian decision theory
Bayseian decision theoryBayseian decision theory
Bayseian decision theorysia16
 
Estimation and Prediction of Complex Systems: Progress in Weather and Climate
Estimation and Prediction of Complex Systems: Progress in Weather and ClimateEstimation and Prediction of Complex Systems: Progress in Weather and Climate
Estimation and Prediction of Complex Systems: Progress in Weather and Climatemodons
 
Integrated Math 2 Section 3-8
Integrated Math 2 Section 3-8Integrated Math 2 Section 3-8
Integrated Math 2 Section 3-8Jimbo Lamb
 
Solutions. Design and Analysis of Experiments. Montgomery
Solutions. Design and Analysis of Experiments. MontgomerySolutions. Design and Analysis of Experiments. Montgomery
Solutions. Design and Analysis of Experiments. MontgomeryByron CZ
 
Applied statistics and probability for engineers solution montgomery && runger
Applied statistics and probability for engineers solution   montgomery && rungerApplied statistics and probability for engineers solution   montgomery && runger
Applied statistics and probability for engineers solution montgomery && rungerAnkit Katiyar
 
SOLVING BVPs OF SINGULARLY PERTURBED DISCRETE SYSTEMS
SOLVING BVPs OF SINGULARLY PERTURBED DISCRETE SYSTEMSSOLVING BVPs OF SINGULARLY PERTURBED DISCRETE SYSTEMS
SOLVING BVPs OF SINGULARLY PERTURBED DISCRETE SYSTEMSTahia ZERIZER
 
Lesson 12: Implicit Differentiation
Lesson 12: Implicit DifferentiationLesson 12: Implicit Differentiation
Lesson 12: Implicit DifferentiationMatthew Leingang
 
Linear models for classification
Linear models for classificationLinear models for classification
Linear models for classificationSung Yub Kim
 
Options Portfolio Selection
Options Portfolio SelectionOptions Portfolio Selection
Options Portfolio Selectionguasoni
 
Applied Business Statistics ,ken black , ch 3 part 2
Applied Business Statistics ,ken black , ch 3 part 2Applied Business Statistics ,ken black , ch 3 part 2
Applied Business Statistics ,ken black , ch 3 part 2AbdelmonsifFadl
 

Was ist angesagt? (19)

Bayseian decision theory
Bayseian decision theoryBayseian decision theory
Bayseian decision theory
 
Estimation and Prediction of Complex Systems: Progress in Weather and Climate
Estimation and Prediction of Complex Systems: Progress in Weather and ClimateEstimation and Prediction of Complex Systems: Progress in Weather and Climate
Estimation and Prediction of Complex Systems: Progress in Weather and Climate
 
Integrated Math 2 Section 3-8
Integrated Math 2 Section 3-8Integrated Math 2 Section 3-8
Integrated Math 2 Section 3-8
 
Decision theory
Decision theoryDecision theory
Decision theory
 
Chapter7
Chapter7Chapter7
Chapter7
 
Chapter14
Chapter14Chapter14
Chapter14
 
Solutions. Design and Analysis of Experiments. Montgomery
Solutions. Design and Analysis of Experiments. MontgomerySolutions. Design and Analysis of Experiments. Montgomery
Solutions. Design and Analysis of Experiments. Montgomery
 
Applied statistics and probability for engineers solution montgomery && runger
Applied statistics and probability for engineers solution   montgomery && rungerApplied statistics and probability for engineers solution   montgomery && runger
Applied statistics and probability for engineers solution montgomery && runger
 
SOLVING BVPs OF SINGULARLY PERTURBED DISCRETE SYSTEMS
SOLVING BVPs OF SINGULARLY PERTURBED DISCRETE SYSTEMSSOLVING BVPs OF SINGULARLY PERTURBED DISCRETE SYSTEMS
SOLVING BVPs OF SINGULARLY PERTURBED DISCRETE SYSTEMS
 
9-8 Notes
9-8 Notes9-8 Notes
9-8 Notes
 
Lesson 12: Implicit Differentiation
Lesson 12: Implicit DifferentiationLesson 12: Implicit Differentiation
Lesson 12: Implicit Differentiation
 
Chapter15
Chapter15Chapter15
Chapter15
 
7주차
7주차7주차
7주차
 
A basic introduction to learning
A basic introduction to learningA basic introduction to learning
A basic introduction to learning
 
Logistics regression
Logistics regressionLogistics regression
Logistics regression
 
Anniversary2012
Anniversary2012Anniversary2012
Anniversary2012
 
Linear models for classification
Linear models for classificationLinear models for classification
Linear models for classification
 
Options Portfolio Selection
Options Portfolio SelectionOptions Portfolio Selection
Options Portfolio Selection
 
Applied Business Statistics ,ken black , ch 3 part 2
Applied Business Statistics ,ken black , ch 3 part 2Applied Business Statistics ,ken black , ch 3 part 2
Applied Business Statistics ,ken black , ch 3 part 2
 

Andere mochten auch

Factor Analysis
Factor AnalysisFactor Analysis
Factor Analysisganuraga
 
Factor Analysis
Factor AnalysisFactor Analysis
Factor Analysisganuraga
 
Factor Analysis
Factor AnalysisFactor Analysis
Factor Analysisganuraga
 
Heterokesdatisitas
HeterokesdatisitasHeterokesdatisitas
Heterokesdatisitasganuraga
 
Heterokesdatisitas
HeterokesdatisitasHeterokesdatisitas
Heterokesdatisitasganuraga
 
Segmentasi Diferensiasi Positioning
Segmentasi Diferensiasi PositioningSegmentasi Diferensiasi Positioning
Segmentasi Diferensiasi Positioningganuraga
 
Makalah strategi diferensiasi
Makalah strategi diferensiasiMakalah strategi diferensiasi
Makalah strategi diferensiasiFika Ratnasari
 

Andere mochten auch (7)

Factor Analysis
Factor AnalysisFactor Analysis
Factor Analysis
 
Factor Analysis
Factor AnalysisFactor Analysis
Factor Analysis
 
Factor Analysis
Factor AnalysisFactor Analysis
Factor Analysis
 
Heterokesdatisitas
HeterokesdatisitasHeterokesdatisitas
Heterokesdatisitas
 
Heterokesdatisitas
HeterokesdatisitasHeterokesdatisitas
Heterokesdatisitas
 
Segmentasi Diferensiasi Positioning
Segmentasi Diferensiasi PositioningSegmentasi Diferensiasi Positioning
Segmentasi Diferensiasi Positioning
 
Makalah strategi diferensiasi
Makalah strategi diferensiasiMakalah strategi diferensiasi
Makalah strategi diferensiasi
 

Ähnlich wie A Measure Of Independence For A Multifariate Normal Distribution And Some Connections With Factor Analysis

A Measure Of Independence For A Multifariate Normal Distribution And Some Con...
A Measure Of Independence For A Multifariate Normal Distribution And Some Con...A Measure Of Independence For A Multifariate Normal Distribution And Some Con...
A Measure Of Independence For A Multifariate Normal Distribution And Some Con...ganuraga
 
Meanshift Tracking Presentation
Meanshift Tracking PresentationMeanshift Tracking Presentation
Meanshift Tracking Presentationsandtouch
 
Iteration method-Solution of algebraic and Transcendental Equations.
Iteration method-Solution of algebraic and Transcendental Equations.Iteration method-Solution of algebraic and Transcendental Equations.
Iteration method-Solution of algebraic and Transcendental Equations.AmitKumar8151
 
FITTED OPERATOR FINITE DIFFERENCE METHOD FOR SINGULARLY PERTURBED PARABOLIC C...
FITTED OPERATOR FINITE DIFFERENCE METHOD FOR SINGULARLY PERTURBED PARABOLIC C...FITTED OPERATOR FINITE DIFFERENCE METHOD FOR SINGULARLY PERTURBED PARABOLIC C...
FITTED OPERATOR FINITE DIFFERENCE METHOD FOR SINGULARLY PERTURBED PARABOLIC C...ieijjournal
 
FITTED OPERATOR FINITE DIFFERENCE METHOD FOR SINGULARLY PERTURBED PARABOLIC C...
FITTED OPERATOR FINITE DIFFERENCE METHOD FOR SINGULARLY PERTURBED PARABOLIC C...FITTED OPERATOR FINITE DIFFERENCE METHOD FOR SINGULARLY PERTURBED PARABOLIC C...
FITTED OPERATOR FINITE DIFFERENCE METHOD FOR SINGULARLY PERTURBED PARABOLIC C...ieijjournal
 
Large variance and fat tail of damage by natural disaster
Large variance and fat tail of damage by natural disasterLarge variance and fat tail of damage by natural disaster
Large variance and fat tail of damage by natural disasterHang-Hyun Jo
 
ISI MSQE Entrance Question Paper (2011)
ISI MSQE Entrance Question Paper (2011)ISI MSQE Entrance Question Paper (2011)
ISI MSQE Entrance Question Paper (2011)CrackDSE
 
Jam 2006 Test Papers Mathematical Statistics
Jam 2006 Test Papers Mathematical StatisticsJam 2006 Test Papers Mathematical Statistics
Jam 2006 Test Papers Mathematical Statisticsashu29
 
Taylor Polynomials and Series
Taylor Polynomials and SeriesTaylor Polynomials and Series
Taylor Polynomials and SeriesMatthew Leingang
 
Pc12 sol c08_8-6
Pc12 sol c08_8-6Pc12 sol c08_8-6
Pc12 sol c08_8-6Garden City
 
Measures of risk on variability with application in stochastic activity networks
Measures of risk on variability with application in stochastic activity networksMeasures of risk on variability with application in stochastic activity networks
Measures of risk on variability with application in stochastic activity networksAlexander Decker
 
ISI MSQE Entrance Question Paper (2008)
ISI MSQE Entrance Question Paper (2008)ISI MSQE Entrance Question Paper (2008)
ISI MSQE Entrance Question Paper (2008)CrackDSE
 
GATE Mathematics Paper-2000
GATE Mathematics Paper-2000GATE Mathematics Paper-2000
GATE Mathematics Paper-2000Dips Academy
 
Lecture 2-Filtering.pdf
Lecture 2-Filtering.pdfLecture 2-Filtering.pdf
Lecture 2-Filtering.pdfTechEvents1
 
A Probabilistic Algorithm for Computation of Polynomial Greatest Common with ...
A Probabilistic Algorithm for Computation of Polynomial Greatest Common with ...A Probabilistic Algorithm for Computation of Polynomial Greatest Common with ...
A Probabilistic Algorithm for Computation of Polynomial Greatest Common with ...mathsjournal
 
The Wishart and inverse-wishart distribution
 The Wishart and inverse-wishart distribution The Wishart and inverse-wishart distribution
The Wishart and inverse-wishart distributionPankaj Das
 
Sparse data formats and efficient numerical methods for uncertainties in nume...
Sparse data formats and efficient numerical methods for uncertainties in nume...Sparse data formats and efficient numerical methods for uncertainties in nume...
Sparse data formats and efficient numerical methods for uncertainties in nume...Alexander Litvinenko
 
Lesson 21: Curve Sketching (Section 4 version)
Lesson 21: Curve Sketching (Section 4 version)Lesson 21: Curve Sketching (Section 4 version)
Lesson 21: Curve Sketching (Section 4 version)Matthew Leingang
 
A New Iterative Method For Solving Absolute Value Equations
A New Iterative Method For Solving Absolute Value EquationsA New Iterative Method For Solving Absolute Value Equations
A New Iterative Method For Solving Absolute Value EquationsJackie Gold
 
Engr 371 final exam april 1996
Engr 371 final exam april 1996Engr 371 final exam april 1996
Engr 371 final exam april 1996amnesiann
 

Ähnlich wie A Measure Of Independence For A Multifariate Normal Distribution And Some Connections With Factor Analysis (20)

A Measure Of Independence For A Multifariate Normal Distribution And Some Con...
A Measure Of Independence For A Multifariate Normal Distribution And Some Con...A Measure Of Independence For A Multifariate Normal Distribution And Some Con...
A Measure Of Independence For A Multifariate Normal Distribution And Some Con...
 
Meanshift Tracking Presentation
Meanshift Tracking PresentationMeanshift Tracking Presentation
Meanshift Tracking Presentation
 
Iteration method-Solution of algebraic and Transcendental Equations.
Iteration method-Solution of algebraic and Transcendental Equations.Iteration method-Solution of algebraic and Transcendental Equations.
Iteration method-Solution of algebraic and Transcendental Equations.
 
FITTED OPERATOR FINITE DIFFERENCE METHOD FOR SINGULARLY PERTURBED PARABOLIC C...
FITTED OPERATOR FINITE DIFFERENCE METHOD FOR SINGULARLY PERTURBED PARABOLIC C...FITTED OPERATOR FINITE DIFFERENCE METHOD FOR SINGULARLY PERTURBED PARABOLIC C...
FITTED OPERATOR FINITE DIFFERENCE METHOD FOR SINGULARLY PERTURBED PARABOLIC C...
 
FITTED OPERATOR FINITE DIFFERENCE METHOD FOR SINGULARLY PERTURBED PARABOLIC C...
FITTED OPERATOR FINITE DIFFERENCE METHOD FOR SINGULARLY PERTURBED PARABOLIC C...FITTED OPERATOR FINITE DIFFERENCE METHOD FOR SINGULARLY PERTURBED PARABOLIC C...
FITTED OPERATOR FINITE DIFFERENCE METHOD FOR SINGULARLY PERTURBED PARABOLIC C...
 
Large variance and fat tail of damage by natural disaster
Large variance and fat tail of damage by natural disasterLarge variance and fat tail of damage by natural disaster
Large variance and fat tail of damage by natural disaster
 
ISI MSQE Entrance Question Paper (2011)
ISI MSQE Entrance Question Paper (2011)ISI MSQE Entrance Question Paper (2011)
ISI MSQE Entrance Question Paper (2011)
 
Jam 2006 Test Papers Mathematical Statistics
Jam 2006 Test Papers Mathematical StatisticsJam 2006 Test Papers Mathematical Statistics
Jam 2006 Test Papers Mathematical Statistics
 
Taylor Polynomials and Series
Taylor Polynomials and SeriesTaylor Polynomials and Series
Taylor Polynomials and Series
 
Pc12 sol c08_8-6
Pc12 sol c08_8-6Pc12 sol c08_8-6
Pc12 sol c08_8-6
 
Measures of risk on variability with application in stochastic activity networks
Measures of risk on variability with application in stochastic activity networksMeasures of risk on variability with application in stochastic activity networks
Measures of risk on variability with application in stochastic activity networks
 
ISI MSQE Entrance Question Paper (2008)
ISI MSQE Entrance Question Paper (2008)ISI MSQE Entrance Question Paper (2008)
ISI MSQE Entrance Question Paper (2008)
 
GATE Mathematics Paper-2000
GATE Mathematics Paper-2000GATE Mathematics Paper-2000
GATE Mathematics Paper-2000
 
Lecture 2-Filtering.pdf
Lecture 2-Filtering.pdfLecture 2-Filtering.pdf
Lecture 2-Filtering.pdf
 
A Probabilistic Algorithm for Computation of Polynomial Greatest Common with ...
A Probabilistic Algorithm for Computation of Polynomial Greatest Common with ...A Probabilistic Algorithm for Computation of Polynomial Greatest Common with ...
A Probabilistic Algorithm for Computation of Polynomial Greatest Common with ...
 
The Wishart and inverse-wishart distribution
 The Wishart and inverse-wishart distribution The Wishart and inverse-wishart distribution
The Wishart and inverse-wishart distribution
 
Sparse data formats and efficient numerical methods for uncertainties in nume...
Sparse data formats and efficient numerical methods for uncertainties in nume...Sparse data formats and efficient numerical methods for uncertainties in nume...
Sparse data formats and efficient numerical methods for uncertainties in nume...
 
Lesson 21: Curve Sketching (Section 4 version)
Lesson 21: Curve Sketching (Section 4 version)Lesson 21: Curve Sketching (Section 4 version)
Lesson 21: Curve Sketching (Section 4 version)
 
A New Iterative Method For Solving Absolute Value Equations
A New Iterative Method For Solving Absolute Value EquationsA New Iterative Method For Solving Absolute Value Equations
A New Iterative Method For Solving Absolute Value Equations
 
Engr 371 final exam april 1996
Engr 371 final exam april 1996Engr 371 final exam april 1996
Engr 371 final exam april 1996
 

Kürzlich hochgeladen

Computer 10: Lesson 10 - Online Crimes and Hazards
Computer 10: Lesson 10 - Online Crimes and HazardsComputer 10: Lesson 10 - Online Crimes and Hazards
Computer 10: Lesson 10 - Online Crimes and HazardsSeth Reyes
 
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve DecarbonizationUsing IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve DecarbonizationIES VE
 
VoIP Service and Marketing using Odoo and Asterisk PBX
VoIP Service and Marketing using Odoo and Asterisk PBXVoIP Service and Marketing using Odoo and Asterisk PBX
VoIP Service and Marketing using Odoo and Asterisk PBXTarek Kalaji
 
Empowering Africa's Next Generation: The AI Leadership Blueprint
Empowering Africa's Next Generation: The AI Leadership BlueprintEmpowering Africa's Next Generation: The AI Leadership Blueprint
Empowering Africa's Next Generation: The AI Leadership BlueprintMahmoud Rabie
 
Igniting Next Level Productivity with AI-Infused Data Integration Workflows
Igniting Next Level Productivity with AI-Infused Data Integration WorkflowsIgniting Next Level Productivity with AI-Infused Data Integration Workflows
Igniting Next Level Productivity with AI-Infused Data Integration WorkflowsSafe Software
 
Artificial Intelligence & SEO Trends for 2024
Artificial Intelligence & SEO Trends for 2024Artificial Intelligence & SEO Trends for 2024
Artificial Intelligence & SEO Trends for 2024D Cloud Solutions
 
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...DianaGray10
 
Machine Learning Model Validation (Aijun Zhang 2024).pdf
Machine Learning Model Validation (Aijun Zhang 2024).pdfMachine Learning Model Validation (Aijun Zhang 2024).pdf
Machine Learning Model Validation (Aijun Zhang 2024).pdfAijun Zhang
 
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...UbiTrack UK
 
Linked Data in Production: Moving Beyond Ontologies
Linked Data in Production: Moving Beyond OntologiesLinked Data in Production: Moving Beyond Ontologies
Linked Data in Production: Moving Beyond OntologiesDavid Newbury
 
UiPath Platform: The Backend Engine Powering Your Automation - Session 1
UiPath Platform: The Backend Engine Powering Your Automation - Session 1UiPath Platform: The Backend Engine Powering Your Automation - Session 1
UiPath Platform: The Backend Engine Powering Your Automation - Session 1DianaGray10
 
20230202 - Introduction to tis-py
20230202 - Introduction to tis-py20230202 - Introduction to tis-py
20230202 - Introduction to tis-pyJamie (Taka) Wang
 
IaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdf
IaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdfIaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdf
IaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdfDaniel Santiago Silva Capera
 
Building AI-Driven Apps Using Semantic Kernel.pptx
Building AI-Driven Apps Using Semantic Kernel.pptxBuilding AI-Driven Apps Using Semantic Kernel.pptx
Building AI-Driven Apps Using Semantic Kernel.pptxUdaiappa Ramachandran
 
Crea il tuo assistente AI con lo Stregatto (open source python framework)
Crea il tuo assistente AI con lo Stregatto (open source python framework)Crea il tuo assistente AI con lo Stregatto (open source python framework)
Crea il tuo assistente AI con lo Stregatto (open source python framework)Commit University
 
NIST Cybersecurity Framework (CSF) 2.0 Workshop
NIST Cybersecurity Framework (CSF) 2.0 WorkshopNIST Cybersecurity Framework (CSF) 2.0 Workshop
NIST Cybersecurity Framework (CSF) 2.0 WorkshopBachir Benyammi
 
UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8DianaGray10
 
UiPath Community: AI for UiPath Automation Developers
UiPath Community: AI for UiPath Automation DevelopersUiPath Community: AI for UiPath Automation Developers
UiPath Community: AI for UiPath Automation DevelopersUiPathCommunity
 
Comparing Sidecar-less Service Mesh from Cilium and Istio
Comparing Sidecar-less Service Mesh from Cilium and IstioComparing Sidecar-less Service Mesh from Cilium and Istio
Comparing Sidecar-less Service Mesh from Cilium and IstioChristian Posta
 

Kürzlich hochgeladen (20)

Computer 10: Lesson 10 - Online Crimes and Hazards
Computer 10: Lesson 10 - Online Crimes and HazardsComputer 10: Lesson 10 - Online Crimes and Hazards
Computer 10: Lesson 10 - Online Crimes and Hazards
 
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve DecarbonizationUsing IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
Using IESVE for Loads, Sizing and Heat Pump Modeling to Achieve Decarbonization
 
201610817 - edge part1
201610817 - edge part1201610817 - edge part1
201610817 - edge part1
 
VoIP Service and Marketing using Odoo and Asterisk PBX
VoIP Service and Marketing using Odoo and Asterisk PBXVoIP Service and Marketing using Odoo and Asterisk PBX
VoIP Service and Marketing using Odoo and Asterisk PBX
 
Empowering Africa's Next Generation: The AI Leadership Blueprint
Empowering Africa's Next Generation: The AI Leadership BlueprintEmpowering Africa's Next Generation: The AI Leadership Blueprint
Empowering Africa's Next Generation: The AI Leadership Blueprint
 
Igniting Next Level Productivity with AI-Infused Data Integration Workflows
Igniting Next Level Productivity with AI-Infused Data Integration WorkflowsIgniting Next Level Productivity with AI-Infused Data Integration Workflows
Igniting Next Level Productivity with AI-Infused Data Integration Workflows
 
Artificial Intelligence & SEO Trends for 2024
Artificial Intelligence & SEO Trends for 2024Artificial Intelligence & SEO Trends for 2024
Artificial Intelligence & SEO Trends for 2024
 
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...
Connector Corner: Extending LLM automation use cases with UiPath GenAI connec...
 
Machine Learning Model Validation (Aijun Zhang 2024).pdf
Machine Learning Model Validation (Aijun Zhang 2024).pdfMachine Learning Model Validation (Aijun Zhang 2024).pdf
Machine Learning Model Validation (Aijun Zhang 2024).pdf
 
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
UWB Technology for Enhanced Indoor and Outdoor Positioning in Physiological M...
 
Linked Data in Production: Moving Beyond Ontologies
Linked Data in Production: Moving Beyond OntologiesLinked Data in Production: Moving Beyond Ontologies
Linked Data in Production: Moving Beyond Ontologies
 
UiPath Platform: The Backend Engine Powering Your Automation - Session 1
UiPath Platform: The Backend Engine Powering Your Automation - Session 1UiPath Platform: The Backend Engine Powering Your Automation - Session 1
UiPath Platform: The Backend Engine Powering Your Automation - Session 1
 
20230202 - Introduction to tis-py
20230202 - Introduction to tis-py20230202 - Introduction to tis-py
20230202 - Introduction to tis-py
 
IaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdf
IaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdfIaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdf
IaC & GitOps in a Nutshell - a FridayInANuthshell Episode.pdf
 
Building AI-Driven Apps Using Semantic Kernel.pptx
Building AI-Driven Apps Using Semantic Kernel.pptxBuilding AI-Driven Apps Using Semantic Kernel.pptx
Building AI-Driven Apps Using Semantic Kernel.pptx
 
Crea il tuo assistente AI con lo Stregatto (open source python framework)
Crea il tuo assistente AI con lo Stregatto (open source python framework)Crea il tuo assistente AI con lo Stregatto (open source python framework)
Crea il tuo assistente AI con lo Stregatto (open source python framework)
 
NIST Cybersecurity Framework (CSF) 2.0 Workshop
NIST Cybersecurity Framework (CSF) 2.0 WorkshopNIST Cybersecurity Framework (CSF) 2.0 Workshop
NIST Cybersecurity Framework (CSF) 2.0 Workshop
 
UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8UiPath Studio Web workshop series - Day 8
UiPath Studio Web workshop series - Day 8
 
UiPath Community: AI for UiPath Automation Developers
UiPath Community: AI for UiPath Automation DevelopersUiPath Community: AI for UiPath Automation Developers
UiPath Community: AI for UiPath Automation Developers
 
Comparing Sidecar-less Service Mesh from Cilium and Istio
Comparing Sidecar-less Service Mesh from Cilium and IstioComparing Sidecar-less Service Mesh from Cilium and Istio
Comparing Sidecar-less Service Mesh from Cilium and Istio
 

A Measure Of Independence For A Multifariate Normal Distribution And Some Connections With Factor Analysis

  • 1. Journal of Multivariate Analysis 96 (2005) 374 – 383 www.elsevier.com/locate/jmva A measure of independence for a multivariate normal distribution and some connections with factor analysis Martin Knott∗ Statistics Department, London School of Economics and Political Science, Houghton Street, London WC2A2AE, UK Received 28 January 2004 Available online 7 December 2004 Abstract This paper gives results for the population value of a measure of the goodness-of-fit of a general multivariate normal distribution to the simpler hypothesis of independent normal variables. The mea- sure was introduced by Rudas, Clogg and Lindsay in 1994, who gave the value for the bivariate normal distribution. Connections with factor analysis are briefly discussed. © 2004 Elsevier Inc. All rights reserved. AMS 1991 subject classification: 62H20; 62H25; 46N10 Keywords: Goodness-of-fit; Convex optimisation; Minimum trace factor analysis 1. Introduction Rudas, Clogg and Linday [9] introduced a new index of fit for contingency table models. Their idea was to measure the population goodness-of-fit of a model by writing the popu- lation under investigation as a mixture of the population under the model and an arbitrary population. The index of fit that they proposed is the largest mixing probability that can be given to the model so that the mixture representation is feasible. Their measure is the maximum proportion of the population under investigation that can be thought of as coming ∗ Fax: +44 0171 955 7416. E-mail address: m.knott@lse.ac.uk. 0047-259X/$ - see front matter © 2004 Elsevier Inc. All rights reserved. doi:10.1016/j.jmva.2004.10.013
  • 2. M. Knott / Journal of Multivariate Analysis 96 (2005) 374 – 383 375 from the population under the assumptions of the model. It lies between 0 and 1, and is close to 1 for a model that fits well. To use the index in practice one must estimate it from a sample, but this paper concentrates on the population quantities. This attractive index of fit (which is applicable much more widely than to models for contingency tables) has been further investigated in [3,8]. In order to check if the measure is as attractive as it seems, it is useful to have its value worked out in a few simple cases. Rudas, Clogg and Lindsay gave the value of the index of fit, say , for a general bivariate normal distribution with correlation coefficient to the simpler model of independent normal distributions. 1 1−| | 2 = . (1) 1+| | This is a very appealing measure of independence for a bivariate normal distribution, but it would be good to see the index of fit to independence for a general q-dimensional multi- variate normal distribution. In Section 2 the problem of finding the index of fit for a general multivariate normal distribution is reduced to a one of maximising a concave function over a convex set. A dual problem is given and also extremality relations that characterise the solution. Section 3 applies the approach to obtain explicit solutions for equicorrelated nor- mal variables, and gives an example of the use of a simple algorithm to obtain the index of fit for a general multivariate normal distribution. Section 4 contains a discussion and interpretation of the results in the context of factor analysis, in particular the method of minimum trace estimation of the linear factor model. 2. Optimisation results The multivariate normal distribution will be assumed to be non-degenerate. If it has a singular covariance matrix, then one could reduce the dimension by a rotation and continue. The arguments of [9], see Eq. (12) in their Appendix A, show that is the largest number such that for some choice of the univariate normal density functions gXi (xi ) for i = 1, . . . , p, and for all x, q fX (x) gXi (xi ), (2) i=1 where fX (x) is the density function of the multivariate normal distribution of interest for the random variables X = (X1 , . . . , Xq ) taking values x = (x1 , . . . , xq ). It has been assumed implicitly that for the right-hand side of (2) no degenerate distributions are allowed for X, which is necessary to include the joint distribution of X under independence in the class of distributions allowed for the left-hand side of (2). One may therefore, without loss of generality, by making changes in location and scale on both sides of (2), take the random variables X on the left-hand side of (2) to have mean 0 and variance 1. Their correlation matrix will be written as C. It is not clear at this stage what choice of the means, say = ( 1 , . . . , q ), and the diagonal matrix of variances, say
  • 3. 376 M. Knott / Journal of Multivariate Analysis 96 (2005) 374 – 383 = diag( 11 , . . . , qq ), of the normal distributions on the right-hand side of (2) will allow to be attained. Inequality (2) may be written 1 1 1 1 − ln det C − x C−1 x ln − ln det − (x − ) −1 (x − ), (3) 2 2 2 2 which simplifies to 1 det 1 1 ln x (C−1 − −1 )x + −1 x− −1 . (4) 2 det C 2 2 From (4) following the same argument as in [9], we obtain only if C−1 − −1 is negative semidefinite. Let us now decide what value for gives the largest . Essentially, as a referee has pointed out, one shows that min maxx of the right-hand side of (4) is zero. If C−1 − −1 is singular, then for every x in the kernel space of C−1 − −1 it must be −1 required that x = 0, otherwise for some sufficiently long x, would be forced to be indefinitely small. So we can assume that −1 is in the image space of C−1 − −1 , and define a as an inverse image of −1 , so that (C−1 − −1 )a = −1 . (5) Then (4) becomes 1 det ln 2 det C 1 1 1 (x − a) (C−1 − −1 )(x − a) − a (C−1 − −1 )a − −1 . (6) 2 2 2 The right-hand side of (6) can be written 1 1 (x − a) (C−1 − −1 )(x − a) − a (C−1 − −1 )a 2 2 1 − a (C−1 − −1 ) (C−1 − −1 )a 2 which simplifies to 1 (x − a) (C−1 − −1 )(x − a) + a (C−1 + C−1 C−1 )a. 2 Since a (C−1 + C−1 C−1 )a is non-negative for all a, the choice a = 0 allows the largest . Putting a = 0 implies choosing = 0, as can be seen from (5). Note that one is free to choose a = 0 without consideration of (x − a) (C−1 − −1 )(x − a) because inequality (6) must be true for all choices of x, which is the same as for all choices of x − a. Returning to (4), but taking = 0, 1 det 1 ln x (C−1 − −1 )x. (7) 2 det C 2
Since C^{-1} − Θ^{-1} is negative semidefinite, the inequality is at its most stringent when x = 0. So the index satisfies

    ln ζ − (1/2) ln(det Θ/det C) ≤ 0    (8)

for all Θ for which C^{-1} − Θ^{-1} is negative semidefinite. That is, ζ is the largest value of

    (det Θ/det C)^{1/2}    (9)

for which Θ = diag(θ_{11}, ..., θ_{qq}) satisfies the condition that C^{-1} − Θ^{-1} is negative semidefinite.
   To find ζ one needs to maximise det Θ subject to Θ^{-1} ≥ C^{-1} (using standard conventions for writing the Loewner partial ordering of matrices); see [5, p. 140]. The final stage of the proof will proceed by reformulating the last statement of the problem, and then using a procedure based on convex optimisation methods.
   It is true that Θ^{-1} ≥ C^{-1} if and only if Θ ≤ C [5, Exercise 13, p. 168]. So the problem can be reformulated as that of maximising ln det Θ subject to Θ ≤ C and Θ diagonal. More precisely, we want to find

    sup ln det Θ    (10)

over all diagonal positive semidefinite matrices Θ satisfying

    Θ ≤ C.    (11)

This form of the problem is seen as the maximisation of a concave function of the matrix Θ over a convex set of such matrices. It is easy to see that ln det Θ is concave and strictly increasing on the set of positive definite diagonal matrices Θ. It is obvious that the constraint Θ ≤ C restricts Θ to a convex subset of all matrices Θ. So all the well-known results from convex optimisation can be applied. When considering a dual problem, one needs to use the norm [trace(A′A)]^{1/2} for a matrix A, and the corresponding inner product. Results from, for instance, [4, pp. 62–68], or from [13, Chapter 4.1], can be used to show that the problem in (10) and (11) has a solution, and that the solution is the same as that for the dual problem of finding

    inf [−ln det diag D + trace CD − q]    (12)

over all positive semidefinite matrices D.
   The dual problem is easier because there is no restriction on D except for positive semidefiniteness. Rather than justify in detail the application of the general convex optimisation theory, it seems better to prove Theorem 1 directly. The notation diag D means the diagonal matrix with diagonal elements equal to those of D. To avoid trivialities any matrix inverse will be interpreted as a generalised inverse where necessary.
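To illustrate the primal problem (10), (11) and its dual (12), the following S fragment (in the style of the programme of Section 5) evaluates both objectives on a small example. It is only a sketch: the correlation 0.5 is an arbitrary illustrative choice, and the trial matrix D anticipates the q = 2 equicorrelated solution of Section 3, so the two objective values coincide; for an arbitrary feasible pair the primal value is only a lower bound for the dual value.

rho<-0.5; C<-c(1,rho,rho,1); dim(C)<-c(2,2);
Theta<-diag(c(1-rho,1-rho));              # a feasible diagonal matrix: C - Theta = rho*11' is psd
min(eigen(C-Theta)$values);               # feasibility check for (11): should be >= 0
primal<-sum(log(diag(Theta)));            # ln det Theta, the objective in (10)
D<-c(1,-1,-1,1)/(1-rho); dim(D)<-c(2,2);  # a positive semidefinite trial matrix for (12)
dual<- -sum(log(diag(D)))+sum(diag(C%*%D))-2;  # the dual objective in (12)
primal; dual;                             # equal here, both -1.386294
zeta<-exp(primal/2)/sqrt(prod(eigen(C)$values))  # (det Theta/det C)^(1/2), from (9): 0.5773503

The value 0.5773503 agrees with ((1 − 0.5)/(1 + 0.5))^{1/2} from formula (1).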
Theorem 1. If the positive definite matrix Θ̂ and the positive semidefinite matrix D̂ are such that Θ̂ = [diag D̂]^{-1}, Θ̂ ≤ C, and satisfy the extremality relation

    trace[(Θ̂ − C)D̂] = 0,

then Θ̂ gives a solution to (10), (11) and D̂ gives a solution to (12), and the solutions are equal.
   The proof of Theorem 1 is as follows. From the extremality relation,

    ln det Θ̂ = −trace[(Θ̂ − C)D̂] + ln det Θ̂ ≥ −trace[(Θ − C)D̂] + ln det Θ

for every diagonal positive definite matrix Θ. The last inequality comes from considering maximum likelihood estimation of the variances Θ^{-1} for independent normally distributed random variables with means zero and sample covariance matrix D̂. It follows that for all positive definite diagonal Θ with Θ ≤ C,

    ln det Θ̂ ≥ ln det Θ,

since trace[(Θ − C)D̂] ≤ 0 for such Θ. This is enough to prove that Θ̂ gives a solution to (10), (11).
   On the other hand, for the dual problem,

    −ln det diag D̂ + trace CD̂ − q = −ln det diag D̂ − trace[((diag D̂)^{-1} − C)D̂]
                                  = −ln det diag D̂
                                  ≤ −ln det diag D̂ − trace[((diag D̂)^{-1} − C)D]
                                  ≤ −ln det diag D − trace[((diag D)^{-1} − C)D]
                                  = −ln det diag D + trace CD − q,

where D is any positive semidefinite matrix for which diag D is invertible. The first equality uses trace[(diag D̂)^{-1}D̂] = q, the second uses the extremality relation, and the first inequality uses Θ̂ = (diag D̂)^{-1} ≤ C together with the positive semidefiniteness of D. The last inequality is again using maximum likelihood estimation of the variances of independent normal random variables with mean zero and covariance matrix D, and shows that D̂ gives a solution to (12). It is also clear that the solutions of the primal and dual problems are equal.
   One other immediate consequence of Theorem 1 is that for every diagonal positive semidefinite Θ satisfying Θ ≤ C, and for every positive semidefinite D, it is true that

    ln det Θ ≤ −ln det diag D + trace CD − q,    (13)
so it is easy to generate upper bounds for the measure of independence. The left- and right-hand sides of this last inequality are also seen to be equal when Θ = Θ̂ and D = D̂.

3. Applications

As a first application, the measure of independence for equally correlated multivariate normal variables will be displayed. The measure would be expected to change its form according to whether the common correlation coefficient ρ is positive or negative, since the range of ρ for q equally correlated normal variables is between −1/(q − 1) and 1.
   Supposing that −1/(q − 1) < ρ ≤ 0, it is easily verified that taking

    D̂ = 11′/(1 + (q − 1)ρ),

where 1 is a q × 1 vector with all elements equal to 1, and so Θ̂ = [diag D̂]^{-1} = (1 + (q − 1)ρ)I_q, gives a solution. The difference C − Θ̂ is equal to −ρq(I − 11′/q), which is positive semidefinite (and of rank q − 1). The measure of independence is, from (9),

    ζ = ((1 + (q − 1)ρ)/(1 − ρ))^{(q−1)/2}.    (14)

If 0 < ρ < 1, then taking

    D̂ = (q/((q − 1)(1 − ρ)))[I − 11′/q]

gives a solution. The difference C − Θ̂ is equal to ρ11′, which is positive semidefinite (and of rank 1). The measure of independence is, from (9),

    ζ = ((1 − ρ)/(1 + (q − 1)ρ))^{1/2}.    (15)

Results (14), (15) generalise the case q = 2 in [9]. Writing ζ(ρ) for ζ evaluated at the correlation coefficient ρ, there is a curious reciprocal property that for ρ ≥ 0, ζ(−ρ)ζ(ρ) = 1.
   The results for equally correlated normal random variables show that as q → ∞, the measure of independence tends to 0 for fixed ρ. It is hard to say whether that is a surprising property or not. Note that the ranks of C − Θ̂ and also of D̂ for ρ ≤ 0 and for ρ ≥ 0 show examples of the extremes of their possible values: it would not be enough simply to look always for rank 1 or rank q − 1 matrices.
   A second application will show how easy the index is to calculate for a general correlation matrix. The correlation matrix that appears in Table 1 is from a study by Smith and Stanley [11] on the relation between reaction times and intelligence test scores. Factor analysis results for these data appear in the source paper, and in [1, pp. 69–72].
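Before turning to that example, the equicorrelated solution is easy to check numerically. The following S fragment is a sketch only; q = 3 and ρ = 0.2 are arbitrary illustrative choices. It builds D̂ for the positive-ρ case, forms Θ̂ = [diag D̂]^{-1}, verifies the conditions of Theorem 1, and compares ζ from (9) with formula (15).

q<-3; rho<-0.2;
C<-(1-rho)*diag(q)+rho*matrix(1,q,q);        # the equicorrelation matrix
Dhat<-(q/((q-1)*(1-rho)))*(diag(q)-matrix(1,q,q)/q);
Thetahat<-solve(diag(diag(Dhat)));           # [diag Dhat]^{-1}, here (1-rho)*I
min(eigen(C-Thetahat)$values);               # feasibility: C - Thetahat = rho*11' is psd (rank 1)
sum(diag((Thetahat-C)%*%Dhat));              # extremality relation of Theorem 1: 0
zeta<-sqrt(prod(diag(Thetahat))/prod(eigen(C)$values));  # (det Thetahat/det C)^(1/2)
zeta;                                        # 0.7559289
sqrt((1-rho)/(1+(q-1)*rho))                  # formula (15) gives the same value

The negative-ρ case works in the same way, with D̂ = 11′/(1 + (q − 1)ρ).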
Table 1
Correlation coefficients for Smith and Stanley's data

    1.000  0.466  0.552  0.340  0.576  0.510
    0.466  1.000  0.572  0.193  0.263  0.239
    0.552  0.572  1.000  0.445  0.354  0.356
    0.340  0.193  0.445  1.000  0.184  0.219
    0.576  0.263  0.354  0.184  1.000  0.794
    0.510  0.239  0.356  0.219  0.794  1.000

The dual problem is very well behaved, so it is easy to write a simple algorithm in SPLUS to solve it; the details are given at the end, in Section 5. The measure of independence is given by this algorithm as ζ = 0.08970137, so about 9% of the observations could be considered to be from independent normal distributions. The eigenvalues in test are 0.0001800282, −0.0007531514, −0.0683073036, −0.3883489215, −0.8617405225 and −2.7810610728, showing that to the accuracy expected (diag D̂)^{-1} ≤ C, but that with two eigenvalues close to 0, the rank is effectively 4. This ties in with the presence of a Heywood case that is discovered by maximum likelihood fitting of a normal factor model to this correlation matrix; see the discussion in [1].

4. Factor analysis

There are similarities between the problem in (10), (11) and the minimum trace method of fitting factor analysis models introduced to psychometrics by [2]. Given an observed covariance matrix C, the minimum trace method finds a diagonal positive semidefinite matrix Θ such that Θ ≤ C and trace[C − Θ] is minimised. The method thus seeks to maximise trace Θ while keeping Θ ≤ C. It is parallel to the problem in (10) and (11), but maximises the trace rather than the determinant. The maximisation of trace Θ was considered using optimisation theory by [7,10], and in a manner closer to that used in this paper by [12]. Much of the interest has been in the algorithms to find the solutions; see [6].
   The matrices Θ̂, D̂ giving the solution for the minimum trace problem are not in general the same as for the problem considered in this paper. One could propose a 'maximum determinant' method for fitting factor analysis models, and use the methods developed in this paper to carry out the fitting, but there are perhaps too many ways to fit factor analysis models already. As it happens, the matrices Θ̂, D̂ leading to the solutions for the maximum determinant problem for equally correlated normal variables given in Section 3 also provide a solution for the minimum trace problem.
   It is possible to construct a family of criteria for fitting factor analysis models that includes the maximum determinant method and the minimum trace method as special cases. All that is necessary is to maximise over Θ = diag(θ_{11}, θ_{22}, ..., θ_{qq}) the criterion

    (1/α) ln [ Σ_{i=1}^{q} θ_{ii}^α / q ]

for some α, 0 < α ≤ 1, where as before Θ ≤ C. The case α = 1 gives the minimum trace method, while letting α → 0 gives the maximum determinant method.
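The limiting behaviour of this family of criteria is easy to see numerically. The short S fragment below is a sketch with arbitrary illustrative variances: at α = 1 the criterion is the logarithm of the average of the θ_ii, a monotone function of trace Θ, while for small α it approaches the α → 0 limit (1/q) Σ ln θ_ii, which is ln det Θ apart from the factor 1/q.

crit<-function(theta,alpha) log(sum(theta^alpha)/length(theta))/alpha;
theta<-c(0.5,0.8,0.3);       # arbitrary diagonal elements of Theta
crit(theta,1);               # alpha = 1: -0.6286087, the minimum trace criterion
crit(theta,0.001);           # small alpha: -0.7066747, already close to the limit below
mean(log(theta))             # the alpha -> 0 limit: -0.7067545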
It is even possible to generalise the weighted minimum trace method introduced by Shapiro [10] by using the family of criteria

    (1/α) ln [ Σ_{i=1}^{q} p_i θ_{ii}^α ],    (16)

where the p_i are non-zero probabilities that sum to 1. Results for this extension corresponding to Theorem 1 are given in Theorems 2 and 3, which are offered without proof, but which can be proved in a manner not too dissimilar to that of Theorem 1.

Theorem 2 (Primal generalised weighted minimum trace). For 0 < α < 1, β = α/(α − 1) and W = diag(√p_1, ..., √p_q), where the p_i are non-zero probabilities adding to 1, (18), (19), (20) are a set of sufficient conditions for a diagonal positive semidefinite matrix Θ̂ = diag(θ̂_{11}, θ̂_{22}, ..., θ̂_{qq}) to give a solution for

    sup (1/α) ln[ trace(WΘ^α W) ]    (17)

over all positive semidefinite diagonal Θ ≤ C:

    Θ̂ ≤ C,    (18)

    θ̂_{ss} = (diag D̂)_{ss}^{β−1} / trace[ W (diag D̂)^β W ]  for s = 1, ..., q,    (19)

    trace[ W(Θ̂ − C)W D̂ ] = 0,    (20)

where D̂ is a positive semidefinite matrix.

Theorem 3 (Dual generalised weighted minimum trace). For 0 < α < 1, β = α/(α − 1) and W = diag(√p_1, ..., √p_q), where the p_i are non-zero probabilities adding to 1, (18), (19), (20) are a set of sufficient conditions for a positive semidefinite matrix D̂ with at least one non-zero diagonal element to give a solution for

    inf [ trace(WCWD) − 1 − (1/β) ln[ trace(W (diag D)^β W) ] ]    (21)

over all positive semidefinite D.

Condition (19) can also be written

    diag D̂ = (diag Θ̂)^{α−1} / trace[ W Θ̂^α W ].    (22)
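The exponents in (19) and (22) as reconstructed here can be checked on a small example. The following S fragment is a sketch with arbitrary weights and variances: starting from a diagonal Θ̂ it forms diag D̂ by (22) and then recovers Θ̂ from (19); for diagonal matrices the traces reduce to weighted sums over the p_i.

alpha<-0.5; beta<-alpha/(alpha-1);     # here beta = -1
p<-c(0.2,0.3,0.5);                     # arbitrary non-zero probabilities summing to 1
theta<-c(0.4,0.9,0.6);                 # arbitrary diagonal elements of Theta-hat
d<-theta^(alpha-1)/sum(p*theta^alpha); # diag D-hat from (22); trace[W Theta^alpha W] = sum p*theta^alpha
d^(beta-1)/sum(p*d^beta)               # (19) returns 0.4 0.9 0.6, recovering theta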
5. S code

The following S programme worked well in SPLUS 6.1 on a Gateway Pentium II running Windows 2000.

corr<-c(1,0.466,0.552,0.340,0.576,0.510,
        0.466,1,0.572,0.193,0.263,0.239,
        0.552,0.572,1,0.445,0.354,0.356,
        0.340,0.193,0.445,1,0.184,0.219,
        0.576,0.263,0.354,0.184,1,0.794,
        0.510,0.239,0.356,0.219,0.794,1);
q<-6; dim(corr)<-c(q,q);
e<-c(rnorm(q*q)); dim(e)<-c(q,q);
theta<-0.01;
for (ii in 1:800) {
  obj<- -sum(log(diag(e%*%t(e))))+sum(diag(corr%*%e%*%t(e)))-q;
  diff<- solve(diag(diag(e%*%t(e))))-corr;
  e1<-e+theta*diff%*%e;
  objnew<- -sum(log(diag(e1%*%t(e1))))+sum(diag(corr%*%e1%*%t(e1)))-q;
  if (obj > objnew) {e<-e1; theta<-2*theta} else {theta<-theta/2}
};
test<-eigen(solve(diag(diag(e%*%t(e))))-corr)$values;
zeta<-exp(-sum(log(diag(e%*%t(e))))/2-sum(log(eigen(corr)$values))/2)

The algorithm seeks to decrease the value of the objective function for the dual problem (12). The correlation matrix is written as corr. The algorithm uses a matrix E such that D = EE′ to make sure that D is positive semidefinite, and uses the derivatives [(diag EE′)^{-1} − C]E of the objective function with respect to the elements of E to decide on the right direction in which to move E. Though it is possible that this algorithm will stick at a local optimum, one has an automatic check on global optimality through testing whether Θ̂ = (diag D̂)^{-1} ≤ C, and one can move D̂ in a random direction to break the stalemate.

References

[1] D.J. Bartholomew, M. Knott, Latent Variable Models and Factor Analysis, second ed., Kendall Library of Statistics, vol. 7, Arnold, London, 1999.
[2] P.M. Bentler, A lower bound method for the dimension-free measurement of internal consistency, Soc. Sci. Res. 1 (1972) 343–357.
[3] C.C. Clogg, T. Rudas, L. Xi, A new index of structure for the analysis of models for mobility tables and other cross-classifications, Sociol. Methodol. 25 (1995) 197–222.
[4] I. Ekeland, R. Temam, Convex Analysis and Variational Problems, North-Holland Publishing Company, Amsterdam, 1976.
[5] P.R. Halmos, Finite-Dimensional Vector Spaces, second ed., D. Van Nostrand Company Inc, Princeton, 1958.
[6] M. Jamshidian, P.M. Bentler, A quasi-Newton method for minimum trace factor analysis, J. Stat. Comput. Simul. 62 (1998) 73–89.
[7] G.D. Riccia, A. Shapiro, Minimum rank and minimum trace of covariance matrices, Psychometrika 47 (1982) 443–448.
[8] T. Rudas, The mixture index of fit and minimax regression, Metrika 50 (1999) 163–172.
[9] T. Rudas, C.C. Clogg, B.G. Lindsay, A new index of fit based on mixture models for the analysis of contingency tables, J. Roy. Statist. Soc. Ser. B 56 (1994) 623–639.
[10] A. Shapiro, Rank reducibility of a symmetric matrix and sampling theory of minimum trace factor analysis, Psychometrika 47 (1982) 187–199.
[11] G.A. Smith, G. Stanley, Clocking g: relating intelligence and measures of timed performance, Intelligence 7 (1983) 353–368.
[12] J.M.F. ten Berge, T.A.B. Snijders, F.E. Zegers, Computational aspects of the greatest lower bound to the reliability and constrained minimum trace factor analysis, Psychometrika 46 (1981) 201–213.
[13] H. Wolkowicz, R. Saigal, L. Vandenberghe (Eds.), Handbook of Semidefinite Programming: Theory, Algorithms and Applications, Kluwer Academic Publishers, Boston, 2000.