Cosmo-not: a brief look at methods of analysis in functional MRI and in diffusion tensor imaging (DTI)

Paul Taylor
AIMS, UMDNJ

Cosmology seminar, Nov. 2012
Outline
•   FMRI and DTI described (briefly)
•   Granger Causality
•   PCA
•   ICA
    – Individual, group, covariance networks
• Jackknifing/bootstrapping
The brain in brief (large scales)
Has many complex parts:
   blood vessels
   neurons
   aqueous tissue (GM, WM, CSF)

Activity (examples):
   hydrodynamics
   electrical impulses
   chemical signaling

How do different parts/areas work together?
A) Observe various parts acting together in unison during some activities (functional relation -> fMRI)
B) Follow structural connections, especially WM tracts, which affect random motion in fluid/aqueous tissue (-> DTI, DSI, et al.)
Functional (GM)
Example: resting state networks (Biswal et al., 2010, PNAS)

GM ROIs in networks: spatially distinct regions working in concert
Basic fMRI
• General topic of functional MRI:
   – Segment the brain into ‘functional networks’ for various tasks
   – Motor, auditory, vision, memory, executive control, etc.
   – Quantify, track changes, compare populations (HC vs. disorder)
• Try to study which regions have ‘active’ neurons
   – Modalities that measure metabolism directly include PET
• With fMRI, use an indirect measure of blood oxygenation
MRI vs. fMRI

[Figure: MRI gives one anatomical image; fMRI gives a series of images over time]
fMRI
Blood Oxygenation Level Dependent (BOLD) signal: an indirect measure of neural activity

↑ neural activity → ↑ blood oxygen → ↑ fMRI signal
BOLD signal
Blood Oxygen Level Dependent signal

neural activity → ↑ blood flow → ↑ oxyhemoglobin → ↑ T2* → ↑ MR signal

[Figure: Mxy signal decay curves for task vs. control (T2*_task > T2*_control); the signal difference ΔS = S_task − S_control is maximized near TE_optimum]

Sources: fMRIB Brief Introduction to fMRI; Jorge Jovicich
Basic fMRI
Step 1: The person is told to perform a task, maybe tapping fingers, in a time-varying pattern:

[Figure: ON/OFF boxcar stimulus, alternating 30 s blocks]
Basic fMRI
Step 2: We measure a signal from each brain voxel over time.

The signal is basically a local increase in oxygenation; the idea is that active neurons are hungrier and demand an increase in food (oxygen).

[Figure: example slice of time series]
Basic fMRI
Step 3: We compare the brain’s output signals to the stimulus/input signal, looking for strong similarity (correlation).
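Steps 1-3 can be mocked up in a few lines of numpy. The 30 s ON/OFF blocks match the slide; the TR (assumed 2 s, so 15 volumes per block), signal amplitudes, and toy voxel series are invented for illustration:

```python
import numpy as np

# Hypothetical block design: alternating 30 s ON/OFF blocks; with an assumed
# TR of 2 s, each block spans 15 volumes.
n_blocks, vols_per_block = 6, 15
stimulus = np.tile(np.r_[np.ones(vols_per_block), np.zeros(vols_per_block)],
                   n_blocks)

rng = np.random.default_rng(0)
n_t = stimulus.size
active_voxel = 2.0 * stimulus + rng.normal(0, 1, n_t)  # toy task-driven voxel
noise_voxel = rng.normal(0, 1, n_t)                    # toy non-task voxel

r_active = np.corrcoef(stimulus, active_voxel)[0, 1]
r_noise = np.corrcoef(stimulus, noise_voxel)[0, 1]
print(r_active, r_noise)  # strong correlation only for the task-driven voxel
```

Thresholding such correlation values voxel by voxel is what produces the maps in Step 4 below.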
First Functional Images

Source: Kwong et al., 1992
Basic fMRI
Step 4: Map out regions of significant correlation (yellow/red) and anti-correlation (blue), which we take to be involved (to some degree) in the specific task given; these areas are then taken to be ‘functionally’ related networks.
Basic fMRI
• Have several types of tasks:
   – Again: motor, auditory, vision, memory, executive control, etc.
   – Could investigate network by network…
• Or: it has been noticed that correlations among network ROIs exist even during rest
   – Subset of functional MRI called resting-state fMRI (rs-fMRI)
   – First noticed by Biswal et al. (1995)
   – Main rs-fMRI signals exist in the 0.01-0.1 Hz range
   – Offers a way to study several networks at once
Basic rs-fMRI
e.g., Functional Connectome Project resting state networks (Biswal et al.,
     2010):
Granger Causality
• Issue to address: want to find relations between time series; does one affect another directly? Using time-lagged relations, one can try to infer ‘causality’ (Granger 1969) (NB: be careful about what one means by ‘causal’ here…).

• Model a measured time series x(t) as potentially autoregressive (first sum) and with time-lagged contributions of another time series y (second sum):

      x(t) = c1 + Σ_{i=1..p} a_i x(t−i) + Σ_{i=1..p} b_i y(t−i) + u(t)

   – u(t) are errors/‘noise’ terms, and c1 is the baseline
Granger Causality
• Calculation:
   – The residual from the restricted (autoregressive-only) model:

         x(t) = c1 + Σ_{i=1..p} a_i x(t−i) + u_r(t)

   – is compared with that of the full model:

         x(t) = c1 + Σ_{i=1..p} a_i x(t−i) + Σ_{i=1..p} b_i y(t−i) + u_f(t)

   – and put into an F-test:

         F = [ (RSS_r − RSS_f) / p ] / [ RSS_f / (T − 2p − 1) ]

   – (T = number of time points, p the lag order; RSS_r and RSS_f are the residual sums of squares of the two models)
   – Model order determined with the Akaike Information Criterion or Bayesian Information Criterion (AIC and BIC, respectively)
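The model comparison above can be sketched with plain least squares in numpy. The function name, the toy coupled series, and the fixed lag order p = 2 are all invented for illustration (a real analysis would pick p via AIC/BIC, as noted):

```python
import numpy as np

def granger_f(x, y, p=2):
    """F-statistic for 'y Granger-causes x' with lag order p (plain OLS sketch)."""
    T = len(x)
    target = x[p:]
    # Restricted model: constant + p lags of x; full model adds p lags of y.
    lags = lambda s: [s[p - k:T - k] for k in range(1, p + 1)]
    X_r = np.column_stack([np.ones(T - p)] + lags(x))
    X_f = np.column_stack([X_r] + lags(y))
    rss = lambda A: np.sum((target - A @ np.linalg.lstsq(A, target, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(X_r), rss(X_f)
    # F-test on the improvement from adding the y lags.
    return ((rss_r - rss_f) / p) / (rss_f / (T - p - X_f.shape[1]))

# Toy pair: x is driven by lagged y, but not vice versa.
rng = np.random.default_rng(1)
y = rng.normal(size=300)
x = np.zeros(300)
for t in range(1, 300):
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.normal()
print(granger_f(x, y), granger_f(y, x))  # first F should be far larger
```

The asymmetry of the two F values is what yields the directed graphs shown in the results.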
Granger Causality
• Results, for example, in directed graphs:




                              (Rypma et al. 2006)
PCA
• Principal Component Analysis (PCA): can treat an fMRI dataset (3 spatial + 1 time dimensions) as a 2D matrix (voxels x time).
   – Then, want to decompose it into spatial maps (~functional networks) with associated time series
   – Goal: find the components which explain most of the dataset’s variance
   – Essentially an ‘eigen’-problem; use SVD to find the eigenmodes, with the associated eigenvalues determining the relative variance explained
PCA
• To calculate from a (centred) dataset M with N columns:
   – Make the covariance matrix:
      • C = M M^T / (N−1)
   – Calculate the eigenvectors E_i and eigenvalues λ_i of C; the principal components are:
      • PC_i = E_i λ_i^{1/2}
• For fMRI, this can yield a spatial/temporal decomposition of the dataset, with the eigenvectors showing the principal spatial maps (and associated time series), and the relative contribution of each component to the total variance
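A toy numpy version of this recipe; the matrix sizes and the two planted temporal modes are invented so that the variance-explained output has a known answer:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy dataset: 50 'voxels' x 40 time points, built from two latent temporal modes.
t = np.linspace(0, 1, 40)
modes = np.vstack([np.sin(2 * np.pi * 3 * t), np.cos(2 * np.pi * 5 * t)])
M = rng.normal(size=(50, 2)) @ modes + 0.05 * rng.normal(size=(50, 40))

M = M - M.mean(axis=1, keepdims=True)      # centre each voxel's series
N = M.shape[1]                             # number of columns (time points)
C = M @ M.T / (N - 1)                      # covariance matrix, as on the slide
lam, E = np.linalg.eigh(C)                 # eigenvalues come out ascending
lam, E = lam[::-1], E[:, ::-1]             # reorder: largest variance first
explained = lam / lam.sum()
print(explained[:3])                       # first two components dominate
```

Equivalently, `np.linalg.svd(M)` gives the same eigenmodes without forming C explicitly.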
PCA
• Graphic example: finding directions of maximum
  variance for 2 sources




                             (example from web)
PCA



• (go to PCA reconstruction example in
  action from
  http://www.fil.ion.ucl.ac.uk/~wpenny/mbi/)
ICA
• Independent component analysis (ICA) (McKeown et al. 1998; Calhoun et al. 2002) is a method for decomposing a ‘mixed’ MRI signal into separate (statistically) independent components.

(NB: ICA ~ the well-known ‘blind source separation’ or ‘cocktail party’ problems)

(McKeown et al. 1998)
ICA
• ICA in brief (for an excellent discussion, see Hyvarinen & Oja 2000):
   – ICA is basically undoing the Central Limit Theorem
      • CLT: sums of independent random variables tend toward Gaussianity
      • Therefore, to decompose the mixture, find components with maximal non-Gaussianity
   – Several methods exist, essentially based on which function is powering the decomposition (i.e., by what quantity non-Gaussianity is measured): kurtosis, negentropy, pseudo-negentropy, mutual information, max. likelihood/infomax (the latter used by McKeown et al. 1998 in fMRI)

• NB: can’t determine ‘energy’/variances or order of the ICs, due to ambiguity of the matrix decomposition (too much freedom to rescale columns or permute the matrix).
   – i.e.: the relative importance/magnitude of components is not known.
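A compact FastICA-style sketch (tanh nonlinearity, deflation), run on an invented two-source mixture; this is one of the negentropy-approximation approaches listed above, not the exact infomax scheme of McKeown et al., and the sources and mixing matrix are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Two non-Gaussian sources (uniform, Laplacian) and a made-up mixing matrix.
S = np.vstack([rng.uniform(-1, 1, n), rng.laplace(0, 1, n)])
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S

# Whiten the mixtures.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Xw = (E / np.sqrt(d)) @ E.T @ X

# FastICA fixed-point iteration with tanh nonlinearity, one component at a time.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        g = np.tanh(w @ Xw)
        w_new = (Xw * g).mean(axis=1) - (1 - g ** 2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)   # decorrelate from found components
        w_new /= np.linalg.norm(w_new)
        done = abs(w @ w_new) > 1 - 1e-10
        w = w_new
        if done:
            break
    W[i] = w
S_est = W @ Xw                               # recovered sources (up to sign/order)
```

Note that the sign/order ambiguity mentioned above is visible here: the recovered components match the sources only up to scaling, sign, and permutation.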
ICA
• Simple/standard representation of the matrix decomposition for ICA of an individual dataset:

[Diagram: (time x voxels) data matrix = (time x #ICs) mixing matrix x (#ICs x voxels) component matrix; row i of the right-hand factor is the spatial map (IC) of the i-th component, and column i of the mixing matrix is its time series]

Have to choose the number of ICs; often based on ‘knowledge’ of the system, or on a preliminary PCA (variance explained)
ICA
• Can do group ICA, with assumptions of some similarity across the group, to yield a ‘group level’ spatial map
   – Very similar to individual spatial ICA, based on concatenating the datasets along time

[Diagram: subjects’ (time x voxels) matrices stacked along the time axis = (subjects·time x #ICs) mixing matrix x (#ICs x voxels) group spatial maps; each subject contributes its own time series for the i-th component]
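A toy numpy illustration of the temporal-concatenation idea, with one planted spatial map shared across three invented ‘subjects’; the SVD stands in for the full group-ICA step just to show that the shared map is recoverable from the stacked matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_t, n_subj = 200, 50, 3
spatial_map = rng.normal(size=n_vox)       # one shared spatial map (the toy 'IC')
# Each subject: own random time course for the map, plus noise.
subjects = [np.outer(rng.normal(size=n_t), spatial_map)
            + 0.1 * rng.normal(size=(n_t, n_vox)) for _ in range(n_subj)]

# Temporal concatenation: stack the subjects' (time x voxels) matrices along time.
group = np.concatenate(subjects, axis=0)   # (n_subj * n_t) x n_vox
print(group.shape)                         # (150, 200)

# The leading right singular vector recovers the shared map (up to sign).
_, _, Vt = np.linalg.svd(group - group.mean(axis=0), full_matrices=False)
r = np.corrcoef(Vt[0], spatial_map)[0, 1]
print(abs(r))
```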
ICA
• Group ICA example (visual paradigm)




                                        (Calhoun et al. 2009)
ICA
• GLM decomposition (~correlation to a modelled/known time course)
        vs
  ICA decomposition (unknown components -- ‘data driven’, assumptions of independent sources)
• PCA decomposition (orthogonal directions of max variance; 2nd order)
        vs
  ICA decomposition (directions of max independence; higher order)

(images: Calhoun et al. 2009)
Dual Regression
• ICA is useful for finding an individual’s (independent) spatial/temporal maps, and also the ICs that are represented across a group.
   – Dual regression (Beckmann et al. 2009) is a method for taking a group IC and finding its associated, subject-specific IC.
Steps:
• 1) ICA decomposition:
   – -> ‘group’ time courses and ‘group’ spatial maps, the independent components (ICs)

[Diagram: (time x voxels) group data = (time x #ICs) time courses x (#ICs x voxels) spatial maps]

(graphics from ~Beckmann et al. 2009)
Dual Regression
• 2) Use the group ICs as spatial regressors per individual in a GLM
   – -> the time series associated with each spatial map

[Diagram: (voxels x time) subject data fit as (voxels x #ICs) group maps x (#ICs x time) subject time courses]

• 3) GLM regression with those time courses per individual
   – -> find each subject’s spatial map of that IC

[Diagram: (time x voxels) subject data = (time x #ICs) subject time courses x (#ICs x voxels) subject spatial maps]

(graphics from ~Beckmann et al. 2009)
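The two regression steps reduce to two least-squares fits. Below is a minimal sketch with invented sizes and planted ground truth (the group maps, the subject’s true time courses, and the perturbed subject maps are all made up) so that the recovery can be checked:

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_vox, n_ic = 100, 500, 3
group_maps = rng.normal(size=(n_ic, n_vox))                # stand-in group IC maps
true_tc = rng.normal(size=(n_t, n_ic))                     # subject's true courses
subj_maps = group_maps + 0.2 * rng.normal(size=(n_ic, n_vox))
Y = true_tc @ subj_maps + 0.1 * rng.normal(size=(n_t, n_vox))  # time x voxels

# Step 2 (spatial regression): fit the group maps to each time point
# -> subject-specific time course per IC.
tc = np.linalg.lstsq(group_maps.T, Y.T, rcond=None)[0].T   # time x n_ic

# Step 3 (temporal regression): fit those time courses to each voxel
# -> subject-specific spatial map per IC.
est_maps = np.linalg.lstsq(tc, Y, rcond=None)[0]           # n_ic x voxels

r = np.corrcoef(est_maps[0], subj_maps[0])[0, 1]
print(r)  # should track the true subject map closely
```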
ICA for BOLD series and FCNs
Standard BOLD          Subject series
    analysis             analysis
Covariance networks (in brief)
• Group level analysis tool
• Take a single property across whole brain
   – That property has different values across brain (per
     subject) and across subjects (per voxel)
• Find voxels/regions (->network) in which that property
  changes similarly (-> covariance) as one goes from
  subject to subject (-> subject series)
• Networks reflect shared information or single influence
  at basic/organizational level (discussed further, below).
Covariance networks (in brief)
• Can use with many different parameters, e.g.:
   – Mechelli et al. (2005): GMV
   – He et al. (2007): cortical thickness
   – Xu et al. (2009): GMV
   – Zielinski et al. (2010): GMV
   – Bergfield et al. (2010): GMV
   – Zhang et al. (2011): ALFF
   – Taylor et al. (2012): ALFF, fALFF, H, rs-fMRI mean and std, GMV
   – Di et al. (2012): FDG-PET
Analysis: making subject series
• A) Start with a group of M subjects (for example, fMRI datasets)
• B) Calculate a voxelwise parameter, P, producing a 3D dataset per subject
• C) Concatenate the 3D datasets of the whole group (in MNI space) to form a 4D ‘subject series’
   – Analogous to a standard ‘time series’, but now each voxel has M values of P
   – Instead of the i-th ‘time point’, now have the i-th subject
• NB: for all analyses, the order of subjects is arbitrary and has no effect
• Can perform the usual ‘time series’ analyses (correlation, ICA, etc.) on the subject series

[Diagram: A) group of subjects 1-5; B) per-subject 3D parameter map P_i; C) maps concatenated across subjects n = 1…5 into a 4D series]
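Steps A-C amount to stacking per-subject 3D maps along a fourth axis. A minimal sketch with an invented tiny grid and random stand-in parameter maps:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 20                                     # number of subjects
grid = (4, 4, 3)                           # toy 3D brain grid
# One 3D parameter map (some voxelwise P, e.g. GMV) per subject...
maps = [rng.normal(size=grid) for _ in range(M)]

# ...concatenated into a 4D 'subject series': the last axis indexes subjects.
subject_series = np.stack(maps, axis=-1)
print(subject_series.shape)                # (4, 4, 3, 20)

# Each voxel now holds M values of P, so the usual 'time series' tools apply,
# e.g. correlating the subject series of two voxels:
print(np.corrcoef(subject_series[0, 0, 0], subject_series[1, 2, 0])[0, 1])
```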
Interpreting subject series covariance
Ex.: Consider 3 ROIs (X, Y, and Z) in subjects with GMV data.
Say the values of ROIs X and Y correlate strongly, but neither correlates with Z.

  X1 Y1    X2 Y2    X3 Y3    X4 Y4    X5 Y5
    Z1       Z2       Z3       Z4       Z5

--> X and Y form a ‘GMV covariance network’

Then, knowing the X-values and one Y-value (since X and Y can have different bases/scales) can lead us to informed guesses about the remaining Y-values, but nothing can be said about the Z-values.
-> ROIs X and Y carry information about each other even across different subjects, while carrying little/none about Z.
-> X and Y must have some mutual/common influence, which Z may not.
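The X/Y/Z story can be made concrete with invented numbers: X and Y share a common influence (on different scales/baselines), while Z is driven independently:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 200                                    # subjects
common = rng.normal(size=M)                # shared influence driving X and Y
X = common + 0.2 * rng.normal(size=M)
Y = 3.0 + 0.5 * common + 0.2 * rng.normal(size=M)   # different scale/baseline
Z = rng.normal(size=M)                     # locally controlled, no shared input

r_xy = np.corrcoef(X, Y)[0, 1]             # strong: X and Y covary
r_xz = np.corrcoef(X, Z)[0, 1]             # near zero: Z carries no shared info
print(r_xy, r_xz)
```

Despite the different baseline and scale of Y, the correlation exposes the shared influence; Z stays uncorrelated no matter how many subjects are added.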
Interpreting covariance networks
• Analyzing: similarity of brain structure across subjects.
• Null hypothesis: local brain structure is due to local control, (mainly) independent of other regions.
   – -> would observe little/no correlation of the ‘subject series’ non-locally
• Alt. hypothesis: can have (one or many) extended/multi-region influences controlling localities as a general feature
   – -> can observe consistent patterns of properties as correlation of the subject series ‘non-locally’
   – -> the observed network and property are closely related
   – -> one network would have one organizing influence across itself
   – [-> perhaps independent networks with separate influences might have low/no correlation; related networks perhaps have some correlation]
Switching gears…
• Statistical resampling: methods for estimating confidence intervals for estimators
• Several kinds exist; two common ones in fMRI are jackknifing and bootstrapping (see, e.g., Efron et al. 1982).
• Can use with fMRI, and also with DTI (~for noisy ellipsoid estimates: confidence in fit parameters)
Jackknifing
• Basically, take M acquisitions (e.g., M = 12)
• Randomly select MJ < M of them (e.g., MJ = 9) to calculate the quantity of interest
   – standard nonlinear fits
   – (the diffusion ellipsoid is defined by the 6 parameters of its quadratic surface: [D11 D22 D33 D12 D13 D23])
• Repeatedly subsample, a large number of times (~10^3-10^4)
• Analyze the distribution of values for the estimator (mean) and a confidence interval
   – sort/percentiles
      • (not so efficient)
   – if Gaussian, e.g., µ ± 2σ
      • simple
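The steps above can be sketched directly in numpy. The data here are invented scalar ‘acquisitions’ with the mean as the fitted quantity (standing in for the six-parameter tensor fit), and both confidence-interval routes from the slide are shown:

```python
import numpy as np

rng = np.random.default_rng(0)
M, MJ, n_rep = 12, 9, 5000
data = rng.normal(loc=5.0, scale=1.0, size=M)    # toy 'acquisitions'

# Repeatedly subsample MJ < M acquisitions (without replacement) and
# recompute the quantity of interest (here simply the mean).
estimates = np.array([
    data[rng.choice(M, size=MJ, replace=False)].mean() for _ in range(n_rep)])

mu, sigma = estimates.mean(), estimates.std()
ci_sort = np.percentile(estimates, [2.5, 97.5])  # sort/percentile route
ci_gauss = (mu - 2 * sigma, mu + 2 * sigma)      # Gaussian mu +/- 2 sigma route
print(mu, ci_sort, ci_gauss)
```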
Jackknifing
 - distributions are quite Gaussian
 - Gaussianity and σ increase with decreasing MJ
 - µ changes little

(M = 32 gradients)
Jackknifing
 - not too bad even with smaller M
 - but could use min/max from the distributions for percentiles (don’t need to sort)

(M = 12 gradients)
Bootstrapping
• Similar principle to jackknifing, but need multiple copies of the dataset.

[Diagram: four repeated datasets A-D, each with M = 12 measures]
Bootstrapping
• Make an estimate from 12 measures, but with each measure randomly selected from one of the sets:

[Diagram: one resampled set of 12 measures, drawn across sets A-D]
Bootstrapping
• Then select another random (complete) set, build a distribution, etc.

[Diagram: further random draws across sets A-D, each with M = 12]
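A minimal numpy sketch of this resampling scheme, with four invented repeated datasets and the mean as the estimated quantity:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sets, M, n_rep = 4, 12, 5000
# Four repeated acquisitions (sets A-D) of the same M = 12 measures.
sets = rng.normal(loc=3.0, scale=0.5, size=(n_sets, M))

# Each replicate assembles a complete set of M measures, with measure i
# drawn at random from one of the repeated sets; repeat to build a distribution.
estimates = np.empty(n_rep)
for r in range(n_rep):
    picks = rng.integers(0, n_sets, size=M)   # which set supplies each measure
    estimates[r] = sets[picks, np.arange(M)].mean()

print(estimates.mean(), np.percentile(estimates, [2.5, 97.5]))
```

Unlike the jackknife, each replicate here uses a complete set of M measures, so no measure position is ever dropped; the randomness comes from which repetition supplies it.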
Summary
• There is a wide array of methods applicable to MRI analysis
   – Many of them involve statistics and are therefore not always believable at face value.
   – The applicability of the assumptions of the underlying mathematics to the real situation is always key.
   – Often, in MRI, we are concerned with a ‘network’ view of regions working together to do certain tasks.
      • Therefore, we are interested in grouping regions together per task (as with PCA/ICA)
   – New approaches are now starting to look at the temporal variance of networks (using, e.g., sliding-window or wavelet decompositions).
   – Methods of preprocessing (noise filtering, motion correction, MRI-field imperfections) should also be considered part of the methodology.

MHEALTH APPLICATIONS DEVELOPED BY THE MINISTRY OF HEALTH FOR PUBLIC USERS INK...
 
F mri
F mriF mri
F mri
 
EEG based Motor Imagery Classification using SVM and MLP
EEG based Motor Imagery Classification using SVM and MLPEEG based Motor Imagery Classification using SVM and MLP
EEG based Motor Imagery Classification using SVM and MLP
 
Eeg seminar
Eeg seminarEeg seminar
Eeg seminar
 
Connectomics_Journal Club
Connectomics_Journal ClubConnectomics_Journal Club
Connectomics_Journal Club
 

Mehr von CosmoAIMS Bassett

Mauritius Big Data and Machine Learning JEDI workshop
Mauritius Big Data and Machine Learning JEDI workshopMauritius Big Data and Machine Learning JEDI workshop
Mauritius Big Data and Machine Learning JEDI workshopCosmoAIMS Bassett
 
Testing dark energy as a function of scale
Testing dark energy as a function of scaleTesting dark energy as a function of scale
Testing dark energy as a function of scaleCosmoAIMS Bassett
 
Seminar by Prof Bruce Bassett at IAP, Paris, October 2013
Seminar by Prof Bruce Bassett at IAP, Paris, October 2013Seminar by Prof Bruce Bassett at IAP, Paris, October 2013
Seminar by Prof Bruce Bassett at IAP, Paris, October 2013CosmoAIMS Bassett
 
Cosmology with the 21cm line
Cosmology with the 21cm lineCosmology with the 21cm line
Cosmology with the 21cm lineCosmoAIMS Bassett
 
Tuning your radio to the cosmic dawn
Tuning your radio to the cosmic dawnTuning your radio to the cosmic dawn
Tuning your radio to the cosmic dawnCosmoAIMS Bassett
 
A short introduction to massive gravity... or ... Can one give a mass to the ...
A short introduction to massive gravity... or ... Can one give a mass to the ...A short introduction to massive gravity... or ... Can one give a mass to the ...
A short introduction to massive gravity... or ... Can one give a mass to the ...CosmoAIMS Bassett
 
Decomposing Profiles of SDSS Galaxies
Decomposing Profiles of SDSS GalaxiesDecomposing Profiles of SDSS Galaxies
Decomposing Profiles of SDSS GalaxiesCosmoAIMS Bassett
 
Cluster abundances and clustering Can theory step up to precision cosmology?
Cluster abundances and clustering Can theory step up to precision cosmology?Cluster abundances and clustering Can theory step up to precision cosmology?
Cluster abundances and clustering Can theory step up to precision cosmology?CosmoAIMS Bassett
 
An Overview of Gravitational Lensing
An Overview of Gravitational LensingAn Overview of Gravitational Lensing
An Overview of Gravitational LensingCosmoAIMS Bassett
 
Testing cosmology with galaxy clusters, the CMB and galaxy clustering
Testing cosmology with galaxy clusters, the CMB and galaxy clusteringTesting cosmology with galaxy clusters, the CMB and galaxy clustering
Testing cosmology with galaxy clusters, the CMB and galaxy clusteringCosmoAIMS Bassett
 
Galaxy Formation: An Overview
Galaxy Formation: An OverviewGalaxy Formation: An Overview
Galaxy Formation: An OverviewCosmoAIMS Bassett
 
Spit, Duct Tape, Baling Wire & Oral Tradition: Dealing With Radio Data
Spit, Duct Tape, Baling Wire & Oral Tradition: Dealing With Radio DataSpit, Duct Tape, Baling Wire & Oral Tradition: Dealing With Radio Data
Spit, Duct Tape, Baling Wire & Oral Tradition: Dealing With Radio DataCosmoAIMS Bassett
 
From Darkness, Light: Computing Cosmological Reionization
From Darkness, Light: Computing Cosmological ReionizationFrom Darkness, Light: Computing Cosmological Reionization
From Darkness, Light: Computing Cosmological ReionizationCosmoAIMS Bassett
 
WHAT CAN WE DEDUCE FROM STUDIES OF NEARBY GALAXY POPULATIONS?
WHAT CAN WE DEDUCE FROM STUDIES OF NEARBY GALAXY POPULATIONS?WHAT CAN WE DEDUCE FROM STUDIES OF NEARBY GALAXY POPULATIONS?
WHAT CAN WE DEDUCE FROM STUDIES OF NEARBY GALAXY POPULATIONS?CosmoAIMS Bassett
 
Binary pulsars as tools to study gravity
Binary pulsars as tools to study gravityBinary pulsars as tools to study gravity
Binary pulsars as tools to study gravityCosmoAIMS Bassett
 
Cross Matching EUCLID and SKA using the Likelihood Ratio
Cross Matching EUCLID and SKA using the Likelihood RatioCross Matching EUCLID and SKA using the Likelihood Ratio
Cross Matching EUCLID and SKA using the Likelihood RatioCosmoAIMS Bassett
 
Machine Learning Challenges in Astronomy
Machine Learning Challenges in AstronomyMachine Learning Challenges in Astronomy
Machine Learning Challenges in AstronomyCosmoAIMS Bassett
 

Mehr von CosmoAIMS Bassett (20)

Machine learning clustering
Machine learning clusteringMachine learning clustering
Machine learning clustering
 
Mauritius Big Data and Machine Learning JEDI workshop
Mauritius Big Data and Machine Learning JEDI workshopMauritius Big Data and Machine Learning JEDI workshop
Mauritius Big Data and Machine Learning JEDI workshop
 
Testing dark energy as a function of scale
Testing dark energy as a function of scaleTesting dark energy as a function of scale
Testing dark energy as a function of scale
 
Seminar by Prof Bruce Bassett at IAP, Paris, October 2013
Seminar by Prof Bruce Bassett at IAP, Paris, October 2013Seminar by Prof Bruce Bassett at IAP, Paris, October 2013
Seminar by Prof Bruce Bassett at IAP, Paris, October 2013
 
Cosmology with the 21cm line
Cosmology with the 21cm lineCosmology with the 21cm line
Cosmology with the 21cm line
 
Tuning your radio to the cosmic dawn
Tuning your radio to the cosmic dawnTuning your radio to the cosmic dawn
Tuning your radio to the cosmic dawn
 
A short introduction to massive gravity... or ... Can one give a mass to the ...
A short introduction to massive gravity... or ... Can one give a mass to the ...A short introduction to massive gravity... or ... Can one give a mass to the ...
A short introduction to massive gravity... or ... Can one give a mass to the ...
 
Decomposing Profiles of SDSS Galaxies
Decomposing Profiles of SDSS GalaxiesDecomposing Profiles of SDSS Galaxies
Decomposing Profiles of SDSS Galaxies
 
Cluster abundances and clustering Can theory step up to precision cosmology?
Cluster abundances and clustering Can theory step up to precision cosmology?Cluster abundances and clustering Can theory step up to precision cosmology?
Cluster abundances and clustering Can theory step up to precision cosmology?
 
An Overview of Gravitational Lensing
An Overview of Gravitational LensingAn Overview of Gravitational Lensing
An Overview of Gravitational Lensing
 
Testing cosmology with galaxy clusters, the CMB and galaxy clustering
Testing cosmology with galaxy clusters, the CMB and galaxy clusteringTesting cosmology with galaxy clusters, the CMB and galaxy clustering
Testing cosmology with galaxy clusters, the CMB and galaxy clustering
 
Galaxy Formation: An Overview
Galaxy Formation: An OverviewGalaxy Formation: An Overview
Galaxy Formation: An Overview
 
Spit, Duct Tape, Baling Wire & Oral Tradition: Dealing With Radio Data
Spit, Duct Tape, Baling Wire & Oral Tradition: Dealing With Radio DataSpit, Duct Tape, Baling Wire & Oral Tradition: Dealing With Radio Data
Spit, Duct Tape, Baling Wire & Oral Tradition: Dealing With Radio Data
 
MeerKAT: an overview
MeerKAT: an overviewMeerKAT: an overview
MeerKAT: an overview
 
Casa cookbook for KAT 7
Casa cookbook for KAT 7Casa cookbook for KAT 7
Casa cookbook for KAT 7
 
From Darkness, Light: Computing Cosmological Reionization
From Darkness, Light: Computing Cosmological ReionizationFrom Darkness, Light: Computing Cosmological Reionization
From Darkness, Light: Computing Cosmological Reionization
 
WHAT CAN WE DEDUCE FROM STUDIES OF NEARBY GALAXY POPULATIONS?
WHAT CAN WE DEDUCE FROM STUDIES OF NEARBY GALAXY POPULATIONS?WHAT CAN WE DEDUCE FROM STUDIES OF NEARBY GALAXY POPULATIONS?
WHAT CAN WE DEDUCE FROM STUDIES OF NEARBY GALAXY POPULATIONS?
 
Binary pulsars as tools to study gravity
Binary pulsars as tools to study gravityBinary pulsars as tools to study gravity
Binary pulsars as tools to study gravity
 
Cross Matching EUCLID and SKA using the Likelihood Ratio
Cross Matching EUCLID and SKA using the Likelihood RatioCross Matching EUCLID and SKA using the Likelihood Ratio
Cross Matching EUCLID and SKA using the Likelihood Ratio
 
Machine Learning Challenges in Astronomy
Machine Learning Challenges in AstronomyMachine Learning Challenges in Astronomy
Machine Learning Challenges in Astronomy
 

Cosmo-not: Methods of Analysis in fMRI and DTI

  • 1. Cosmo-not: a brief look at methods of analysis in functional MRI and in diffusion tensor imaging (DTI) Paul Taylor AIMS, UMDNJ Cosmology seminar, Nov. 2012
  • 2. Outline • FMRI and DTI described (briefly) • Granger Causality • PCA • ICA – Individual, group, covariance networks • Jackknifing/bootstrapping
  • 3. The brain in brief (large scales) has many parts- complex blood vessels neurons aqueous tissue (GM, WM, CSF) activity (examples): hydrodynamics electrical impulses chemical
  • 4. The brain in brief (large scales) has many parts- complex blood vessels neurons aqueous tissue (GM, WM, CSF) activity (examples): hydrodynamics electrical impulses chemical how do different parts/areas work together? A) observe various parts acting together in unison during some activities (functional relation -> fMRI) B) follow structural connections, esp. due to WM tracts, which affect random motion in fluid/aqueous tissue (-> DTI, DSI, et al.)
  • 5. Functional (GM) Example: Resting state networks Biswal et al. (2010, PNAS) GM ROIs in networks: spatially distinct regions working in concert
  • 6. Basic fMRI • General topic of functional MRI: – Segment the brain into ‘functional networks’ for various tasks – Motor, auditory, vision, memory, executive control, etc. – Quantify, track changes, compare populations (HC vs disorder)
  • 7. Basic fMRI • General topic of functional MRI: – Segment the brain into ‘functional networks’ for various tasks – Motor, auditory, vision, memory, executive control, etc. – Quantify, track changes, compare populations (HC vs disorder) • Try to study which regions have ‘active’ neurons – Modalities for measuring metabolism directly include PET scan
  • 8. Basic fMRI • General topic of functional MRI: – Segment the brain into ‘functional networks’ for various tasks – Motor, auditory, vision, memory, executive control, etc. – Quantify, track changes, compare populations (HC vs disorder) • Try to study which regions have ‘active’ neurons – Modalities for measuring metabolism directly include PET scan • With fMRI, use an indirect measure of blood oxygenation
  • 9. MRI vs. fMRI: MRI gives one image; fMRI gives a series of images over time. The Blood Oxygenation Level Dependent (BOLD) signal is an indirect measure of neural activity: ↑ neural activity → ↑ blood oxygen → ↑ fMRI signal
  • 10. BOLD signal: Blood Oxygen Level Dependent signal. neural activity → ↑ blood flow → ↑ oxyhemoglobin → ↑ T2* → ↑ MR signal [Figure: Mxy signal curves, T2* task vs. T2* control, with ΔS = Stask − Scontrol maximal near TEoptimum. Sources: fMRIB Brief Introduction to fMRI; Jorge Jovicich]
  • 11. Basic fMRI Step 1: Person is told to perform a task, maybe tapping fingers, in a time-varying pattern: ON OFF 30s 30s 30s 30s 30s 30s
  • 12. Basic fMRI Step 2: we measure a signal from each brain voxel over time signal: basically, local increase in oxygenation: idea that neurons which are active are hungrier, and demand an increase in food (oxygen) (example slice of time series)
  • 13. Basic fMRI Step 3: we compare brain output signals to stimulus/input signal looking for: strong similarity (correlation)
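Steps 1–3 above can be sketched numerically. The following is a minimal illustration in numpy (not the pipeline used in the talk): a 30 s ON/OFF boxcar regressor and a voxelwise Pearson correlation against it. The TR of 2 s and the choice of starting with an ON block are assumptions for the example, and `boxcar`/`activation_map` are hypothetical helper names.

```python
import numpy as np

def boxcar(n_vols, tr=2.0, block=30.0):
    """Stimulus/input signal of Step 1: 30 s ON / 30 s OFF blocks.
    TR = 2 s and first-block-ON are assumptions for this sketch."""
    t = np.arange(n_vols) * tr
    return ((t // block) % 2 == 0).astype(float)

def activation_map(data, regressor):
    """Step 3 as a computation: Pearson correlation of each voxel's
    time series (data: time x voxels) with the stimulus signal."""
    d = data - data.mean(axis=0)
    r = regressor - regressor.mean()
    num = d.T @ r
    denom = np.linalg.norm(d, axis=0) * np.linalg.norm(r)
    return num / denom
```

Voxels with strong (anti-)correlation would then be mapped out as in Step 4.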
  • 14. First Functional Images Source: Kwong et al., 1992
  • 15. Basic fMRI Step 4: map out regions of significant correlation (yellow/red) and anti-correlation (blue), which we take to be involved (to some degree) in the specific task given; these areas are then taken to be ‘functionally’ related networks
  • 16. Basic fMRI • Have several types of tasks: – Again: motor, auditory, vision, memory, executive control, etc. – Could investigate network by network…
  • 17. Basic fMRI • Have several types of tasks: – Again: motor, auditory, vision, memory, executive control, etc. – Could investigate network by network… • Or, it has been noticed that correlations among network ROIs exist even during rest – Subset of functional MRI called resting state fMRI (rs-fMRI) – First noticed by Biswal et al. (1995) – Main rs-fMRI signals exist in the 0.01-0.1 Hz range – Offers a way to study several networks at once
  • 18. Basic rs-fMRI e.g., Functional Connectome Project resting state networks (Biswal et al., 2010):
  • 19. Granger Causality • Issue to address: want to find relations between time series- does one affect another directly? Using time-lagged relations, can try to infer ‘causality’ (Granger 1969) (NB: careful in what one means by causal here…).
  • 20. Granger Causality • Issue to address: want to find relations between time series- does one affect another directly? Using time-lagged relations, can try to infer ‘causality’ (Granger 1969) (NB: careful in what one means by causal here…). • Modelling a measured time series x(t) as potentially autoregressive (first sum) and with time-lagged contributions of other time series y(i) – u(t) are errors/‘noise’ features, and c1 is baseline
  • 21. Granger Causality • Calculation: – Residual from the restricted model (x lags only) is compared with that of the full model (adding the y lags), and put into an F-test (T = number of time points, p the lag) – Model order determined with the Akaike Info. Criterion or Bayesian Info. Criterion (AIC and BIC, respectively)
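The restricted-vs-full comparison above can be sketched in a few lines of numpy. This is a minimal pairwise illustration only: the lag order p is fixed by hand rather than chosen by AIC/BIC as on the slide, and `lagmat`/`granger_f` are hypothetical helper names.

```python
import numpy as np

def lagmat(z, p):
    # row for time t (t = p..T-1) holds z[t-1], ..., z[t-p]
    T = len(z)
    return np.column_stack([z[p - k:T - k] for k in range(1, p + 1)])

def rss(target, design):
    # residual sum of squares of a least-squares fit
    beta, *_ = np.linalg.lstsq(design, target, rcond=None)
    resid = target - design @ beta
    return float(resid @ resid)

def granger_f(x, y, p=2):
    """Does y Granger-'cause' x?  Compare the residual of the restricted
    AR model (x lags only) with the full model (x and y lags) in an F-test."""
    T = len(x)
    xt = x[p:]
    ones = np.ones((T - p, 1))
    restricted = np.hstack([ones, lagmat(x, p)])
    full = np.hstack([restricted, lagmat(y, p)])
    rss_r, rss_f = rss(xt, restricted), rss(xt, full)
    dof = (T - p) - full.shape[1]
    return ((rss_r - rss_f) / p) / (rss_f / dof)
```

A large F for y→x but not x→y would then yield a directed edge in graphs like the one on the next slide.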
  • 22. Granger Causality • Results, for example, in directed graphs: (Rypma et al. 2006)
  • 23. PCA • Principal Component Analysis (PCA): can treat an FMRI dataset (3 spatial + 1 time dimensions) as a 2D matrix (voxels x time). – Then, want to decompose it into spatial maps (~functional networks) with associated time series – goal of finding components which explain max/most of the variance of the dataset – Essentially an ‘eigen’-problem: use SVD to find eigenmodes, with the associated eigenvalues determining the relative variance explained
  • 24. PCA • To calculate from (centred) dataset M with N columns: – Make correlation matrix: • C = M M^T/(N-1) – Calculate eigenvectors Ei and -values λi from C, and the principal component is: • PCi = Ei [λi]^(1/2)
  • 25. PCA • To calculate from (centred) dataset M with N columns: – Make correlation matrix: • C = M M^T/(N-1) – Calculate eigenvectors Ei and -values λi from C, and the principal component is: • PCi = Ei [λi]^(1/2) • For FMRI, this can yield spatial/temporal decomposition of dataset, with eigenvectors showing principal spatial maps (and associated time series), and the relative contribution of each component to total variance
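The recipe above translates directly into numpy. A minimal sketch, following the slide's formulas C = M M^T/(N−1) and PC_i = E_i √λ_i (rows = voxels, columns = time points; `pca` is a hypothetical helper name):

```python
import numpy as np

def pca(M):
    """PCA of a centred data matrix M (rows = voxels, N time columns),
    per the slide: C = M M^T/(N-1); PC_i = E_i * sqrt(lambda_i)."""
    N = M.shape[1]
    C = M @ M.T / (N - 1)
    lam, E = np.linalg.eigh(C)                  # ascending eigenvalues
    order = np.argsort(lam)[::-1]               # sort by variance explained
    lam, E = lam[order], E[:, order]
    PCs = E * np.sqrt(np.clip(lam, 0.0, None))  # column i = PC_i
    return PCs, lam / lam.sum()                 # maps + variance fractions
```

The returned variance fractions are what one inspects to decide how many components to keep.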
  • 26. PCA • Graphic example: finding directions of maximum variance for 2 sources (example from web)
  • 27. PCA • (go to PCA reconstruction example in action from http://www.fil.ion.ucl.ac.uk/~wpenny/mbi/)
  • 28. ICA • Independent component analysis (ICA) (McKeown et al. 1998; Calhoun et al. 2002) is a method for decomposing a ‘mixed’ MRI signal into separate (statistically) independent components. (NB: ICA~ known ‘blind source separation’ or ‘cocktail party’ problems) (McKeown et al. 1998)
  • 29. ICA • ICA in brief (excellent discussion, see Hyvarinen & Oja 2000): – ICA basically is undoing Central Limit Theorem • CLT: sum of independent variables with randomness -> Gaussianity • Therefore, to decompose the mixture, find components with maximal non-Gaussianity – Several methods exist, essentially based on which function is powering the decomposition (i.e., by what quantity is non-Gaussianity measured): kurtosis, negentropy, pseudo-negentropy, mutual information, max. likelihood/infomax (latter used by McKeown et al. 1998 in fMRI)
  • 30. ICA • ICA in brief (excellent discussion, see Hyvarinen & Oja 2000): – ICA basically is undoing Central Limit Theorem • CLT: sum of independent variables with randomness -> Gaussianity • Therefore, to decompose the mixture, find components with maximal non-Gaussianity – Several methods exist, essentially based on which function is powering the decomposition (i.e., by what quantity is non-Gaussianity measured): kurtosis, negentropy, pseudo-negentropy, mutual information, max. likelihood/infomax (latter used by McKeown et al. 1998 in fMRI) • NB: can’t determine ‘energy’/variances or order of ICs, due to ambiguity of matrix decomp (too much freedom to rescale columns or permute matrix). – i.e.: relative importance/magnitude of components is not known.
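The "find maximal non-Gaussianity" idea above can be sketched with the kurtosis-based fixed point of Hyvarinen & Oja (2000). This is a bare-bones deflationary FastICA illustration, not the infomax variant McKeown et al. used for fMRI; `whiten`/`fastica` are hypothetical helper names.

```python
import numpy as np

def whiten(X):
    # centre rows and decorrelate to unit variance (PCA whitening)
    X = X - X.mean(axis=1, keepdims=True)
    lam, E = np.linalg.eigh(np.cov(X))
    return (E @ np.diag(lam ** -0.5) @ E.T) @ X

def fastica(X, n_iter=200, seed=0):
    """Deflationary FastICA with the kurtosis (cubic) nonlinearity:
    each unit maximizes non-Gaussianity of w^T z via the fixed point
    w <- E[z (w^T z)^3] - 3w (cf. Hyvarinen & Oja 2000)."""
    Z = whiten(np.asarray(X, float))
    n = Z.shape[0]
    rng = np.random.default_rng(seed)
    W = np.zeros((n, n))
    for i in range(n):
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            w = (Z * (w @ Z) ** 3).mean(axis=1) - 3.0 * w
            w -= W[:i].T @ (W[:i] @ w)      # Gram-Schmidt deflation
            w /= np.linalg.norm(w)
        W[i] = w
    return W @ Z                             # estimated sources
```

Note how the sign/scale ambiguity discussed on the slide shows up: recovered sources match the true ones only up to sign and ordering.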
  • 31. ICA • Simple/standard representation of matrix decomposition for ICA of an individual dataset: data matrix (time × voxels) = mixing matrix (time × #ICs) × component matrix (#ICs × voxels), where row i of the component matrix is the spatial map (IC) of the i-th component and column i of the mixing matrix is its time series • Have to choose number of ICs--often based on ‘knowledge’ of system, or preliminary PCA-variance explained
  • 32. ICA • Can do group ICA, with assumptions of some similarity across a group to yield ‘group level’ spatial map – Very similar to individual spatial ICA, based on concatenating sets along time
  • 33. ICA • Can do group ICA, with assumptions of some similarity across a group to yield ‘group level’ spatial map – Very similar to individual spatial ICA, based on concatenating sets along time: stacked data (subjects-and-time × voxels) = mixing matrix (subjects-and-time × #ICs) × group spatial maps (#ICs × voxels), with each subject’s block of the mixing matrix giving that subject’s time series of the i-th component (S1, S2, S3)
  • 34. ICA • Group ICA example (visual paradigm) (Calhoun et al. 2009)
  • 35. ICA • GLM decomp (~correlation to modelled/known time course) vs ICA decomp (unknown components-- ‘data driven’, assumptions of indep. sources) (images:Calhoun et al. 2009)
  • 36. ICA • GLM decomp (~correlation to modelled/known time course) vs ICA decomp (unknown components-- ‘data driven’, assumptions of indep. sources) • PCA decomp (ortho. directions of max variance; 2nd order) vs ICA decomp (directions of max independence; higher order) (images: Calhoun et al. 2009)
  • 37. Dual Regression • ICA is useful for finding an individual’s (independent) spatial/temporal maps; also for the ICs which are represented across a group. – Dual regression (Beckmann et al. 2009) is a method for taking that group IC and finding its associated, subject-specific IC.
  • 38. Dual Regression • ICA is useful for finding an individual’s (independent) spatial/temporal maps; also for the ICs which are represented across a group. – Dual regression (Beckmann et al. 2009) is a method for taking that group IC and finding its associated, subject-specific IC. Steps: • 1) ICA decomposition: data (time × voxels) = ‘group’ time courses (time × #ICs) × ‘group’ spatial maps (#ICs × voxels), with independent time components (ICs) (graphics from ~Beckmann et al. 2009)
  • 39. Dual Regression • 2) Use the group ICs (#ICs × voxels) as regressors per individual in a GLM on that subject’s data (time × voxels) – > Time series associated with each spatial map (time × #ICs) (graphics from ~Beckmann et al. 2009)
  • 40. Dual Regression • 2) Use the group ICs as regressors per individual in a GLM – > Time series associated with each spatial map • 3) GLM regression with those time courses per individual: data (time × voxels) = time courses (time × #ICs) × spatial maps (#ICs × voxels) – > find each subject’s spatial map of that IC (graphics from ~Beckmann et al. 2009)
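The two regression stages above can be sketched with ordinary least squares. A minimal illustration for one subject (the real method of Beckmann et al. 2009 adds normalization and group statistics; `dual_regression` is a hypothetical helper name):

```python
import numpy as np

def dual_regression(Y, group_maps):
    """Dual-regression sketch for one subject.
    Y: subject data (time x voxels); group_maps: group ICs (#ICs x voxels).
    Stage 1: regress the group spatial maps against the data
             -> subject-specific time courses.
    Stage 2: regress those time courses back into the data
             -> subject-specific spatial maps."""
    A, *_ = np.linalg.lstsq(group_maps.T, Y.T, rcond=None)   # #ICs x time
    tc = A.T                                                 # time x #ICs
    maps, *_ = np.linalg.lstsq(tc, Y, rcond=None)            # #ICs x voxels
    return tc, maps
```

Each output row of `maps` is the subject-level counterpart of the corresponding group IC.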
  • 41. Covariance networks (in brief) • Group level analysis tool • Take a single property across whole brain – That property has different values across brain (per subject) and across subjects (per voxel) • Find voxels/regions (->network) in which that property changes similarly (-> covariance) as one goes from subject to subject (-> subject series)
  • 42. ICA for BOLD series and FCNs [Figure: standard BOLD analysis alongside subject-series analysis]
  • 43. Covariance networks (in brief) • Group level analysis tool • Take a single property across whole brain – That property has different values across brain (per subject) and across subjects (per voxel) • Find voxels/regions (->network) in which that property changes similarly (-> covariance) as one goes from subject to subject (-> subject series) • Networks reflect shared information or single influence at basic/organizational level (discussed further, below).
  • 44. Covariance networks (in brief) • Can use with many different parameters, e.g.: – Mechelli et al. (2005): GMV – He et al. (2007): cortical thickness – Xu et al. (2009): GMV – Zielinski et al. (2010): GMV – Bergfield et al. (2010): GMV – Zhang et al. (2011): ALFF – Taylor et al. (2012): ALFF, fALFF, H, rs-fMRI mean and std, GMV – Di et al. (2012): FDG-PET
  • 45. Analysis: making subject series • A) Start with group of M subjects (for example, fMRI dataset)
  • 46. Analysis: making subject series • A) Start with group of M subjects (for example, fMRI dataset) • B) Calculate a voxelwise parameter, P, producing 3D dataset per subject
  • 47. Analysis: making subject series • A) Start with group of M subjects (for example, fMRI dataset) • B) Calculate a voxelwise parameter, P, producing 3D dataset per subject • C) Concatenate the 3D datasets of whole group (in MNI) to form a 4D ‘subject series’ – Analogous to standard ‘time series’, but now each voxel has M values of P – Instead of i-th ‘time point’, now have i-th subject
  • 48. Analysis: making subject series • A) Start with group of M subjects (for example, fMRI dataset) • B) Calculate a voxelwise parameter, P, producing 3D dataset per subject • C) Concatenate the 3D datasets of whole group (in MNI) to form a 4D ‘subject series’ – Analogous to standard ‘time series’, but now each voxel has M values of P – Instead of i-th ‘time point’, now have i-th subject • NB: for all analyses, order of subjects is arbitrary and has no effect
  • 49. Analysis: making subject series • A) Start with group of M subjects (for example, fMRI dataset) • B) Calculate a voxelwise parameter, P, producing 3D dataset per subject • C) Concatenate the 3D datasets of whole group (in MNI) to form a 4D ‘subject series’ – Analogous to standard ‘time series’, but now each voxel has M values of P – Instead of i-th ‘time point’, now have i-th subject • NB: for all analyses, order of subjects is arbitrary and has no effect • Can perform usual ‘time series’ analyses (correlation, ICA, etc.) on subject series
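Steps A–C above, plus a simple seed-based correlation over the resulting 4D stack, can be sketched as follows (a minimal numpy illustration; `subject_series_corr` is a hypothetical helper name, and the parameter maps are assumed already in a common space):

```python
import numpy as np

def subject_series_corr(maps, seed_idx):
    """Correlate every voxel's 'subject series' (its M values of the
    parameter P across subjects) with a seed voxel's series.
    maps: 4D stack of per-subject parameter maps, shape (M, nx, ny, nz)."""
    M = maps.shape[0]
    flat = maps.reshape(M, -1).astype(float)   # M x voxels
    flat = flat - flat.mean(axis=0)
    seed = flat[:, seed_idx]
    num = flat.T @ seed
    denom = np.linalg.norm(flat, axis=0) * np.linalg.norm(seed)
    r = np.divide(num, denom, out=np.zeros_like(num), where=denom > 0)
    return r.reshape(maps.shape[1:])           # 3D correlation map
```

Voxels whose values co-vary with the seed as one moves from subject to subject form the covariance network.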
  • 50. Interpreting subject series covariance Ex.: Consider 3 ROIs (X, Y and Z) in subjects with GMV data. Say, values of ROIs X and Y correlate strongly, but neither with Z. [Table: values X1-X5, Y1-Y5, Z1-Z5 across five subjects]
  • 51. Interpreting subject series covariance Ex.: Consider 3 ROIs (X, Y and Z) in subjects with GMV data. Say, values of ROIs X and Y correlate strongly, but neither with Z. --> X and Y form ‘GMV covariance network’
  • 52. Interpreting subject series covariance Ex.: Consider 3 ROIs (X, Y and Z) in subjects with GMV data. Say, values of ROIs X and Y correlate strongly, but neither with Z. Then, knowing the X-values and one Y-value (since X and Y can have different bases/scales) can lead us to informed guesses about the remaining Y-values, but nothing can be said about Z-values.
  • 53. Interpreting subject series covariance Ex.: Consider 3 ROIs (X, Y and Z) in subjects with GMV data. Say, values of ROIs X and Y correlate strongly, but neither with Z. Then, knowing the X-values and one Y-value (since X and Y can have different bases/scales) can lead us to informed guesses about the remaining Y-values, but nothing can be said about Z-values. -> ROIs X and Y have information about each other even across different subjects, while having little/none about Z. -> X and Y must have some mutual/common influence, which Z may not.
  • 54. Interpreting covariance networks • Analyzing: similarity of brain structure across subjects.
  • 55. Interpreting covariance networks • Analyzing: similarity of brain structure across subjects. • Null hypothesis: local brain structure due to local control, (mainly) independent of other regions. – -> would observe little/no correlation of ‘subject series’ non-locally
  • 56. Interpreting covariance networks • Analyzing: similarity of brain structure across subjects. • Null hypothesis: local brain structure due to local control, (mainly) independent of other regions. – -> would observe little/no correlation of ‘subject series’ non-locally • Alt. Hypothesis: can have (1 or many) extended/multi-region influences controlling localities as general feature – -> can observe consistent patterns of properties as correlation of subject series ‘non-locally’
  • 57. Interpreting covariance networks • Analyzing: similarity of brain structure across subjects. • Null hypothesis: local brain structure due to local control, (mainly) independent of other regions. – -> would observe little/no correlation of ‘subject series’ non-locally • Alt. Hypothesis: can have (1 or many) extended/multi-region influences controlling localities as general feature – -> can observe consistent patterns of properties as correlation of subject series ‘non-locally’ – -> observed network and property are closely related
  • 58. Interpreting covariance networks • Analyzing: similarity of brain structure across subjects. • Null hypothesis: local brain structure due to local control, (mainly) independent of other regions. – -> would observe little/no correlation of ‘subject series’ non-locally • Alt. Hypothesis: can have (1 or many) extended/multi-region influences controlling localities as general feature – -> can observe consistent patterns of properties as correlation of subject series ‘non-locally’ – -> observed network and property are closely related – -> one network would have one organizing influence across itself – [-> perhaps independent networks with separate influences might have low/no correlation; related networks perhaps have some correlation].
  • 59. Switching gears… • Statistical resampling: methods for estimating confidence intervals for estimates • Several kinds, two common ones in fMRI are jackknifing and bootstrapping (see, e.g. Efron et al. 1982). • Can use with fMRI, and also with DTI (~for noisy ellipsoid estimates-- confidence in fit parameters)
  • 60. Jackknifing • Basically, take M acquisitions e.g., M=12
  • 61. Jackknifing • Basically, take M acquisitions e.g., M=12 • Randomly select MJ < M (MJ=9) to use to calculate quantity of interest – standard nonlinear fits (the ellipsoid is defined by the 6 parameters [D11 D22 D33 D12 D13 D23] of a quadratic surface)
  • 62. Jackknifing • Basically, take M acquisitions e.g., M=12 • Randomly select MJ < M (MJ=9) to use to calculate quantity of interest – standard nonlinear fits • Repeatedly subsample a large number of times (~10^3-10^4), each subsample giving a new estimate of [D11 D22 D33 D12 D13 D23]
  • 63. Jackknifing • Basically, take M acquisitions e.g., M=12 • Randomly select MJ < M (MJ=9) to use to calculate quantity of interest – standard nonlinear fits • Repeatedly subsample a large number of times (~10^3-10^4) • Analyze distribution of values for estimator (mean) and confidence interval – sort/%iles (not so efficient) – if Gaussian, e.g. µ±2σ (simple)
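The subsampling loop above can be sketched for a generic statistic. A minimal numpy illustration (using the sample mean in place of the nonlinear tensor fit on the slides; `jackknife_ci` is a hypothetical helper name):

```python
import numpy as np

def jackknife_ci(data, mj, n_rep=2000, stat=np.mean, seed=0):
    """Subsampling sketch of the slides' jackknife: repeatedly pick
    MJ < M of the M acquisitions at random (without replacement),
    recompute the statistic, and read a confidence interval off the
    resulting distribution (percentiles here; the slides also mention
    mu +/- 2 sigma if the distribution is Gaussian)."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, float)
    stats = np.array([stat(rng.choice(data, size=mj, replace=False))
                      for _ in range(n_rep)])
    lo, hi = np.percentile(stats, [2.5, 97.5])
    return stats.mean(), (lo, hi)
```

In the DTI case the statistic would be each of the six fitted tensor components in turn.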
  • 64. Jackknifing [Figure: distributions are quite Gaussian; Gaussianity and σ increase with decreasing MJ, while µ changes little. M=32 gradients]
  • 65. Jackknifing [Figure: not too bad even with smaller M; could use min/max from distributions for %iles (don’t need to sort). M=12 gradients]
  • 66. Bootstrapping • Similar principle to jackknifing, but need multiple copies of the dataset (e.g., four sets A, B, C, D, each with M=12)
  • 67. Bootstrapping • Make an estimate from 12 measures, with each one randomly selected from one of the sets (A, B, C, D; each M=12)
  • 68. Bootstrapping • Then select another random (complete) set, build a distribution, etc.
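The resampling scheme of the last three slides can be sketched as follows: given R repeated acquisitions of the same M measures, each resampled "complete" set picks, for every measure, one of the R repeats at random. A minimal numpy illustration (again with the mean standing in for the tensor fit; `bootstrap_repeats` is a hypothetical helper name):

```python
import numpy as np

def bootstrap_repeats(repeats, n_rep=2000, stat=np.mean, seed=0):
    """Bootstrap sketch matching the slides: repeats is an R x M array
    (e.g. sets A-D, each with M=12 measures).  Each resample keeps all
    M measure slots but fills slot i with a randomly chosen repeat of
    measure i; the statistic is recomputed each time."""
    rng = np.random.default_rng(seed)
    repeats = np.asarray(repeats, float)
    R, M = repeats.shape
    stats = np.empty(n_rep)
    for k in range(n_rep):
        pick = rng.integers(0, R, size=M)         # which repeat per measure
        stats[k] = stat(repeats[pick, np.arange(M)])
    return stats.mean(), stats.std()
```

The spread of the resulting distribution again serves as the confidence estimate on the fitted quantity.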
  • 69. Summary • There is a wide array of methods applicable to MRI analysis – Many of them involve statistics and are therefore not always believable at face value. – The applicability of the assumptions of the underlying mathematics to the real situation is always key. – Often, in MRI, we are concerned with a ‘network’ view of regions working together to do certain tasks. • Therefore, we are interested in grouping regions together per task (as with PCA/ICA) – New approaches now start to look at temporal variance of networks (using, e.g., sliding window or wavelet decompositions). – Methods of preprocessing (noise filtering, motion correction, MRI-field imperfections) should also be considered as part of the methodology.