Course Business
! Midterm assignment due on Canvas next
Monday at 1:30 PM
! Unsure if an article is suitable? Can run it by me
! Add-on packages to install for today:
! performance (may have gotten this last week)
! emmeans
! Lab materials on Canvas
Week 8.1: Post-Hoc Comparison
! Unbalanced Factors
! Weighted Coding
! Unweighted Coding
! Post-Hoc Comparisons
! Tukey Test
! Estimated Marginal Means
! Comparing Marginal Means
! Lab
Unbalanced Factors
• Sometimes, we may have differing
numbers of observations per level
• Possible reasons:
• Some categories naturally more common
• e.g., college majors
• Categories may be equally common in the
population, but we have sampling error
• e.g., ended up 60% female participants, 40% male
• Study was designed so that some conditions are
more common
• e.g., more “control” subjects than “intervention” subjects
• We wanted equal numbers of observations, but lost
some because of errors or exclusion criteria
• e.g., data loss due to computer problems
• Dropping subjects below a minimum level of performance
Week 8.1: Post-Hoc Comparison
! Unbalanced Factors
! Weighted Coding
! Unweighted Coding
! Post-Hoc Comparisons
! Tukey Test
! Estimated Marginal Means
! Comparing Marginal Means
! Lab
Weighted Coding
• “For the average student, does course size
predict probability of graduation?”
• Random sample of 200 Pitt undergrads
• 5 are student athletes and 195 are not
• How can we make the intercept reflect the
“average student”?
• We could try to apply effects coding to the
StudentAthlete variable by centering around the
mean and getting (0.5, -0.5), but…
Weighted Coding
• An intercept at 0 would no longer correspond to the
overall mean
• Pictured as a balance scale, this would be totally unbalanced
• To fix balance, we need to assign
a heavier weight to Athlete
[Figure: a balance scale with ATHLETE (5) coded .5 and NOT ATHLETE (195) coded -.5. Zero sits midway between the codes, but the mean of the codes is actually -.475, because "not athlete" is far more common.]
Weighted Coding
• Change codes so the mean is 0
• c(.975, -.025)
• contr.helmert.weighted() function in my
psycholing package will calculate this
[Figure: the same scale with ATHLETE (5) coded .975 and NOT ATHLETE (195) coded -.025; the weighted mean of the codes is now 0.]
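As a rough sketch of what contr.helmert.weighted() computes for a two-level factor, the weighted codes can also be worked out by hand (the object names below are made up for the example):

# Weighted contrast codes for an unbalanced two-level factor:
# keep the codes one unit apart, but shift them so that their
# count-weighted mean is 0
n_athlete     <- 5
n_not_athlete <- 195
n_total       <- n_athlete + n_not_athlete

code_athlete     <-  n_not_athlete / n_total   #  0.975
code_not_athlete <- -n_athlete / n_total       # -0.025

# Check: the count-weighted mean of the codes is 0
weighted.mean(c(code_athlete, code_not_athlete),
              w = c(n_athlete, n_not_athlete))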
Weighted Coding
• Weighted coding: Change the codes so that
the mean is 0 again
• Used when the imbalance reflects something real
• Like Type II sums of squares
• “For the average student, does course size
predict graduation rates?”
• Average student is not a student athlete, and our
answer to the question about an “average student”
should reflect this!
Week 8.1: Post-Hoc Comparison
! Unbalanced Factors
! Weighted Coding
! Unweighted Coding
! Post-Hoc Comparisons
! Tukey Test
! Estimated Marginal Means
! Comparing Marginal Means
! Lab
Unweighted Coding
• Last week we looked at aphasia.csv:
• Response times (RT) in a sentence verification task
• Effect of SubjectType (aphasia vs. control)
• Effect of SentenceType (active vs. passive)
• And their interaction
Unweighted Coding
• Oops! Our experiment loaded up the wrong image
for one of our Passive sentences (“Groceries”)
• It may have been sabotaged
• UsableItem column is No for this item
• First, can we remove this from our data?
• Some possibilities:
• aphasia %>% filter(UsableItem == 'Yes') -> aphasia
• aphasia %>% filter(UsableItem != 'No') -> aphasia2
• etc.
Unweighted Coding
• Oops! Our experiment loaded up the wrong image
for one of our Passive sentences (“Groceries”)
• Now, there’s an imbalance, but it’s an accident and
not meaningful
• In fact, we’d like to get rid of it!
Unweighted Coding
• Oops! Our experiment loaded up the wrong image
for one of our Passive sentences (“Groceries”)
• Now, there’s an imbalance, but it’s an accident and
not meaningful
• In fact, we’d like to get rid of it!
• Retain the (-0.5, 0.5) codes
• Weights the two conditions equally—because the
imbalance isn’t meaningful
• Like Type III sums of squares
• Probably what you want for factorial experiments
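A minimal sketch of how this might look in R, assuming SentenceType and SubjectType are already factors with two levels each:

library(dplyr)

# Drop the unusable item, then keep the unweighted -0.5 / +0.5 codes,
# regardless of how many observations remain in each condition
aphasia <- aphasia %>% filter(UsableItem == 'Yes')
contrasts(aphasia$SentenceType) <- c(-0.5, 0.5)
contrasts(aphasia$SubjectType)  <- c(-0.5, 0.5)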
Unbalanced Factors: Summary
• Weighted coding: Change the codes so that the
mean is 0
• Use when the imbalance reflects something real
• Can be done with contr.helmert.weighted()
• Unweighted coding: Keep the codes as -0.5 and
0.5
• Use when the imbalance is an accident that we
want to eliminate
• With balanced factors, these are identical
[Figure: weighted coding puts the intercept at the mean across each individual observation; unweighted coding puts it at the mean of the two levels (the mean of the active sentences and the mean of the passive sentences).]
Week 8.1: Post-Hoc Comparison
! Unbalanced Factors
! Weighted Coding
! Unweighted Coding
! Post-Hoc Comparisons
! Tukey Test
! Estimated Marginal Means
! Comparing Marginal Means
! Lab
Post-hoc Comparisons
• Maximal model for the aphasia data was:
• model.Maximal <- lmer(RT ~
1 + SentenceType * SubjectType +
(1 + SentenceType|Subject) +
(1 + SubjectType|Item),
data = aphasia)
• This didn’t converge:
• boundary (singular) fit: see ?isSingular
Probably
overparameterized
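A quick sketch of how the singular fit could be inspected (assuming the model was fit with lme4/lmerTest):

library(lme4)
isSingular(model.Maximal)   # TRUE for a boundary (singular) fit
VarCorr(model.Maximal)      # see which variance or correlation sits at the boundary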
Post-hoc Comparisons
• Maximal model for the aphasia data was:
• model.Maximal <- lmer(RT ~
1 + SentenceType * SubjectType +
(1 + SentenceType|Subject) +
(1 + SubjectType|Item),
data = aphasia)
• So, let’s simplify:
• model2 <- lmer(RT ~
1 + SentenceType * SubjectType +
(1 + SentenceType|Subject) +
(1|Item),
data = aphasia)
• Doesn’t seem to harm model fit—p > .20
• anova(model.Maximal, model2)
Post-hoc Comparisons
• With treatment coding, we get estimates of simple effects:
• Intercept: RT for healthy controls, active sentences
• Significant RT difference for passive sentences (among healthy controls)
• Not a significant RT difference for aphasics (among active sentences)
• Significant special effect of aphasia + passive sentence
Post-hoc Comparisons
• The estimates from a model are enough to fully describe differences among conditions
• With simple effects:
• ACTIVE, CONTROL: RT ≈ 1716 ms
• PASSIVE, CONTROL: RT ≈ 2269 ms
• Passive simple effect: +553 ms
[Figure: 2×2 table of SubjectType (Aphasia, Control) × SentenceType (Active, Passive).]
Post-hoc Comparisons
• The estimates from a model are enough to fully describe differences among conditions
• With simple effects:
• ACTIVE, APHASIA: RT ≈ 1801 ms
• ACTIVE, CONTROL: RT ≈ 1716 ms
• PASSIVE, CONTROL: RT ≈ 2269 ms
• Passive simple effect: +553 ms
• Aphasia simple effect: +85 ms
Post-hoc Comparisons
• The estimates from a model are enough to fully describe differences among conditions
• With simple effects:
• ACTIVE, APHASIA: RT ≈ 1801 ms
• PASSIVE, APHASIA: RT ≈ 2542 ms
• ACTIVE, CONTROL: RT ≈ 1716 ms
• PASSIVE, CONTROL: RT ≈ 2269 ms
• Passive simple effect: +553 ms
• Aphasia simple effect: +85 ms
• Interaction effect: +188 ms
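As a small worked sketch (using the approximate values shown above, with treatment coding and control/active as the reference level), the four cell means fall out of the coefficient estimates:

# Approximate estimates from the slides (treatment coding)
intercept     <- 1716   # reference cell: ACTIVE, CONTROL
b_passive     <-  553   # passive simple effect (among controls)
b_aphasia     <-   85   # aphasia simple effect (among active sentences)
b_interaction <-  188   # extra effect of aphasia + passive combined

intercept                                           # ACTIVE, CONTROL  ~ 1716
intercept + b_passive                               # PASSIVE, CONTROL ~ 2269
intercept + b_aphasia                               # ACTIVE, APHASIA  ~ 1801
intercept + b_passive + b_aphasia + b_interaction   # PASSIVE, APHASIA ~ 2542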
Post-hoc Comparisons
• But, sometimes we want to compare individual combinations (e.g., people w/ aphasia seeing active vs. passive sentences)
• i.e., individual cells
[Figure: the 2×2 table of cell means (ACTIVE, APHASIA ≈ 1801 ms; PASSIVE, APHASIA ≈ 2542 ms; ACTIVE, CONTROL ≈ 1716 ms; PASSIVE, CONTROL ≈ 2269 ms), with a "?" marking a pairwise comparison between cells.]
Week 8.1: Post-Hoc Comparison
! Unbalanced Factors
! Weighted Coding
! Unweighted Coding
! Post-Hoc Comparisons
! Tukey Test
! Estimated Marginal Means
! Comparing Marginal Means
! Lab
Post-hoc Comparisons: Tukey Test
• But, sometimes we want to compare individual combinations (e.g., people w/ aphasia seeing active vs. passive sentences)
• i.e., individual cells
• emmeans(model.Maximal, pairwise~SentenceType*SubjectType)
• First argument: the name of the model, not the original dataframe
• pairwise: comparisons of each pair of cells
• After the ~: the independent variables (for now, all of them)
• Requires the emmeans package to be loaded
• library(emmeans)
• Uses a Tukey test to correct for multiple comparisons, so overall α still = .05
• Which two cells don't significantly differ?
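A minimal usage sketch (shown here with model2, the simplified model that converged; the object name posthoc is made up for the example):

library(emmeans)

# Cell means plus all pairwise comparisons, Tukey-adjusted by default
posthoc <- emmeans(model2, pairwise ~ SentenceType * SubjectType)
posthoc$emmeans     # estimated marginal mean for each of the four cells
posthoc$contrasts   # the six pairwise comparisons with adjusted p-values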
Week 8.1: Post-Hoc Comparison
! Unbalanced Factors
! Weighted Coding
! Unweighted Coding
! Post-Hoc Comparisons
! Tukey Test
! Estimated Marginal Means
! Comparing Marginal Means
! Lab
Estimated Marginal Means
• emmeans also returns estimated means and std.
errors for each cell of the design
• EMMs represent what the means of the different
groups would look like if they didn’t differ in other
(fixed or random) variables
Estimated Marginal Means
• As an example, let’s add SentenceLength as a
covariate
• Task was to read the sentence & judge whether it matches a
picture, so length of sentence would plausibly affect time
needed to do this
• Not perfectly balanced across conditions
Estimated Marginal Means
• New model & estimated marginal means:
• model.Length <- lmer(RT ~ 1 + SentenceType *
SubjectType + SentenceLength + (SentenceType|Subject)
+ (1|Item), data=aphasia)
• emmeans(model.Length,
pairwise~SentenceType*SubjectType + SentenceLength)
[Output comparison: the raw data, where the average sentence length differs slightly across conditions, vs. the estimated marginal means, i.e., what we'd expect if the sentences were all of equal length.]
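A minimal sketch of pulling out the covariate-adjusted cell means on their own (by default, emmeans holds a continuous covariate like SentenceLength at its mean):

library(emmeans)
emm_len <- emmeans(model.Length, ~ SentenceType * SubjectType)
as.data.frame(emm_len)   # EMMs with SentenceLength held at its observed mean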
Estimated Marginal Means
• EMMs are a hypothetical
• What the means of the different
groups would look like if they didn’t
differ in this covariate
• Like unweighted coding in the
case of missingness
• Based on our statistical model, so if our
model is wrong (e.g., we picked the wrong
covariate), the adjusted means will be too
• Unlike the raw sample means
• Need to use some caution in interpreting them
• Be clear what you are reporting
Estimated Marginal Means
• Also possible to test whether each of these
estimated cell means significantly differs from 0
• ls_means(model.Length) (from the lmerTest package)
• Silly in case of RTs, but could be relevant for some other
DVs (e.g., preference)
Week 8.1: Post-Hoc Comparison
! Unbalanced Factors
! Weighted Coding
! Unweighted Coding
! Post-Hoc Comparisons
! Tukey Test
! Estimated Marginal Means
! Comparing Marginal Means
! Lab
Comparing Marginal Means
• emmeans can also test marginal means:
• emmeans(model2,pairwise~SubjectType)
• Effect of one variable averaging over the other
• e.g., aphasic participants (averaging over all sentence
types) vs. controls (averaging over all sentence types)
• These are what main effects are testing
Now, include just one variable (for which
we want marginal means)
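A minimal sketch of getting the marginal means for each factor separately (averaging over the levels of the other):

library(emmeans)
emmeans(model2, pairwise ~ SubjectType)    # aphasia vs. control, averaged over sentence type
emmeans(model2, pairwise ~ SentenceType)   # active vs. passive, averaged over subject type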
Week 8.1: Post-Hoc Comparison
! Unbalanced Factors
! Weighted Coding
! Unweighted Coding
! Post-Hoc Comparisons
! Tukey Test
! Estimated Marginal Means
! Comparing Marginal Means
! Lab
