3. Dependent Variables
The variables that are measured by the
experimenter
They are “dependent” on the independent
variables (if there is a relationship between the IV
and DV as the hypothesis predicts).
Consider our class experiment
Conceptual level: Memory
Operational level: Recall test
Present list of words, participants make a
judgment for each word
15 sec. of filler (counting backwards by 3’s)
Measure the accuracy of recall
4. Choosing your dependent variable
How to measure your construct:
Can the participant provide self-report?
• Introspection – specially trained observers report on their own thought
processes; the method fell out of favor in the early 1900s
• Rating scales – strongly agree-agree-undecided-disagree-
strongly disagree
Is the dependent variable directly observable?
• Choice/decision (sometimes timed)
Is the dependent variable indirectly observable?
• Physiological measures (e.g. GSR, heart rate)
• Behavioral measures (e.g. speed, accuracy)
7. Measuring your dependent variables
Scales of measurement – the correspondence between the
numbers and the properties that we’re measuring
The scale that you use will (partially) determine what
kinds of statistical analyses you can perform
9. Scales of measurement
Nominal Scale: Consists of a set of categories that have
different names.
Labels and categorizes observations;
does not make any quantitative distinctions between
observations.
Example:
• Eye color:
blue, green, brown, hazel
11. Scales of measurement
Ordinal Scale: Consists of a set of categories that are
organized in an ordered sequence.
Ranks observations in terms of size or magnitude.
Example:
• T-shirt size:
Small, Med, Lrg, XL, XXL
12. Scales of measurement
Categorical variables
Nominal scale – categories
Ordinal scale – categories with order
Quantitative variables
Interval scale
Ratio scale
13. Scales of measurement
Interval Scale: Consists of ordered categories where all of the
categories are intervals of exactly the same size.
Example: Fahrenheit temperature scale
With an interval scale, equal differences between numbers on
the scale reflect equal differences in magnitude: the increase
from 20º to 40º is the same amount (a 20º increase) as the
increase from 60º to 80º.
However, ratios of magnitudes are not meaningful: 40º is not
“twice as hot” as 20º.
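The “not twice as hot” point can be checked directly: the same pair of temperatures gives a different ratio in Fahrenheit than in Celsius, because neither scale has an absolute zero, while the 20º *difference* survives the conversion. A minimal Python sketch (the temperatures are the slide’s own example):

```python
def f_to_c(f):
    """Convert Fahrenheit to Celsius (both are interval scales)."""
    return (f - 32) * 5 / 9

# "40º is twice 20º" holds only for the Fahrenheit numerals:
ratio_f = 40 / 20                    # 2.0
ratio_c = f_to_c(40) / f_to_c(20)    # about -0.67, not 2.0

# Equal differences, by contrast, are preserved (up to the unit change):
diff_low  = f_to_c(40) - f_to_c(20)  # a 20º F increase
diff_high = f_to_c(80) - f_to_c(60)  # the same size of increase
```

Since the ratio changes with the (arbitrary) zero point, ratios carry no meaning on an interval scale; only differences do.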
14. Scales of measurement
Categorical variables
Nominal scale – categories
Ordinal scale – categories with order
Quantitative variables
Interval scale – ordered categories of the same size
Ratio scale
15. Scales of measurement
Ratio scale: An interval scale with the additional feature
of an absolute zero point.
Ratios of numbers DO reflect ratios of magnitude.
It is easy to get ratio and interval scales confused
• Example: Measuring your height with playing cards
18. Scales of measurement
(Illustration: two heights measured as 8 cards high and
5 cards high)
Interval scale: 0 cards high means “as tall as the table” –
an arbitrary zero point.
Ratio scale: 0 cards high means “no height” – an absolute
zero point.
19. Scales of measurement
Categorical variables
Nominal scale – categories
Ordinal scale – categories with order
Quantitative variables
Interval scale – ordered categories of the same size
Ratio scale – ordered categories of the same size with a
zero point
The “best” scale?
• Given a choice, usually prefer the highest level of
measurement possible
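The link between scale and permissible statistics can be made concrete with a small sketch. This follows the common (if debated) Stevens-style summary, where each level inherits the statistics of the levels below it; the function and names are illustrative, not from the slides:

```python
# Descriptive statistics first meaningful at each level of measurement.
PERMISSIBLE_STATS = {
    "nominal":  ["mode", "frequency counts"],
    "ordinal":  ["median", "percentiles"],
    "interval": ["mean", "standard deviation"],
    "ratio":    ["coefficient of variation", "meaningful ratios"],
}

LEVELS = ["nominal", "ordinal", "interval", "ratio"]

def stats_for(scale):
    """All statistics allowed at `scale`, including those inherited
    from every lower level of measurement."""
    allowed = []
    for level in LEVELS[: LEVELS.index(scale) + 1]:
        allowed += PERMISSIBLE_STATS[level]
    return allowed
```

For example, `stats_for("ordinal")` permits the mode and the median but not the mean, which is one reason a higher level of measurement is usually preferred when available.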
20. Measuring your dependent variables
Scales of measurement
Errors in measurement
Reliability & Validity
21. Example: Measuring intelligence?
Measuring the true score
How do we measure the
construct?
How good is our
measure?
How does it compare to
other measures of the
construct?
Is it a self-consistent
measure?
22. Errors in measurement
In search of the “true score”
Reliability
• Do you get the same value with multiple measurements?
Validity
• Does your measure really measure the construct?
• Is there bias in our measurement? (systematic error)
24. Dartboard analogy
Bull’s eye = the “true score”
Reliability = consistency
Validity = measuring what is intended
(Illustration: three dartboards – darts clustered on the
bull’s eye: reliable and valid; darts clustered away from the
bull’s eye: reliable but invalid; darts scattered: unreliable
and invalid)
25. Reliability
Observed score = true score + measurement error
A reliable measure will have a small amount of
error
Multiple “kinds” of reliability
26. Reliability
Test-retest reliability
Test the same participants more than once
• Measurement from the same person at two
different times
• Should be consistent across different
administrations
(Illustration: consistent scores across the two administrations
are reliable; inconsistent scores are unreliable)
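Test-retest reliability is usually quantified as the correlation between the two administrations. A minimal Python sketch with hypothetical recall scores (the data are made up for illustration):

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation between two sets of paired scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical recall scores for five participants, tested twice:
time1 = [12, 15, 11, 18, 14]
time2 = [13, 16, 10, 17, 15]

r = pearson_r(time1, time2)  # high r -> consistent across administrations
```

A value of r near 1 indicates that participants kept roughly the same rank and spacing across the two testing sessions, which is what test-retest reliability asks for.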
27. Reliability
Internal consistency reliability
Multiple items testing the same construct
Extent to which scores on the items of a measure
correlate with each other
• Cronbach’s alpha (α)
• Split-half reliability
• Correlation of score on one half of the measure with
the other half (randomly determined)
28. Reliability
Inter-rater reliability
At least 2 raters observe behavior
Extent to which raters agree in their observations
• Are the raters consistent?
Requires some training in judgment
(Illustration: two raters timing the same behavior – one
records 5:00, the other 4:56)
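For categorical judgments, inter-rater agreement is often summarized with Cohen’s kappa, a standard statistic that corrects raw agreement for the agreement expected by chance (the coding example below is hypothetical, not from the slides):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Chance agreement from each rater's marginal category frequencies:
    chance = sum(c1[cat] * c2[cat] for cat in c1.keys() | c2.keys()) / n ** 2
    return (observed - chance) / (1 - chance)

# Two raters coding six behaviors as aggressive / not aggressive:
r1 = ["agg", "agg", "not", "agg", "not", "not"]
r2 = ["agg", "not", "not", "agg", "not", "agg"]

kappa = cohens_kappa(r1, r2)
```

Kappa is 1 for perfect agreement and 0 when the raters agree no more often than chance, which is why training raters on a shared coding scheme matters.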
29. Validity
Does your measure really measure what it is
supposed to measure?
There are many “kinds” of validity
32. Face Validity
At the surface level, does it look as if the
measure is testing the construct?
“This guy seems smart to me,
and
he got a high score on my IQ measure.”
33. Construct Validity
Usually requires multiple studies, a large body
of evidence that supports the claim that the
measure really tests the construct
34. Internal Validity
Did the change in the
DV result from the
changes in the IV or
does it come from
something else?
The precision of the results
35. Threats to internal validity
History – an outside event happens during the experiment
Maturation – participants get older (and other
changes)
Selection – nonrandom selection may lead to biases
Mortality – participants drop out or can’t continue
Testing – being in the study actually influences how
the participants respond
36. External Validity
Are experiments “real life” behavioral situations,
or does the process of control put too much
limitation on the “way things really work?”
37. External Validity
Variable representativeness
Relevant variables for the behavior studied along
which the sample may vary
Subject representativeness
Characteristics of sample and target population
along these relevant variables
Setting representativeness
Ecological validity - are the properties of the
research setting similar to those outside the lab
39. Extraneous Variables
Control variables
Holding things constant - Controls for excessive random
variability
Random variables – may freely vary, to spread variability
equally across all experimental conditions
Randomization
• A procedure that assures that each level of an extraneous variable has an
equal chance of occurring in all conditions of observation.
Confound variables
Variables that haven’t been accounted for (manipulated,
measured, randomized, controlled) that can impact changes in
the dependent variable(s)
Co-varies with both the dependent AND an independent
variable
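The randomization procedure described above can be sketched in a few lines: shuffle the participant pool, then deal participants into conditions, so every level of an extraneous variable has an equal chance of landing in each condition. The condition names below are hypothetical:

```python
import random

def random_assignment(participants, conditions, seed=None):
    """Shuffle participants, then deal them round-robin into conditions,
    spreading extraneous variables (age, etc.) evenly by chance."""
    rng = random.Random(seed)  # seed only to make the example repeatable
    pool = list(participants)
    rng.shuffle(pool)
    return {cond: pool[i::len(conditions)] for i, cond in enumerate(conditions)}

# Twenty participants split at random between two hypothetical conditions:
groups = random_assignment(range(20), ["matched list", "mismatched list"],
                           seed=42)
```

Because assignment is random rather than based on any participant characteristic, an extraneous variable can still vary freely, but it cannot systematically co-vary with the IV, so it does not become a confound.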
40. Colors and words
Divide into two groups:
men
women
Instructions: Read aloud the COLOR that the words are
presented in. When done raise your hand.
Women first. Men please close your eyes.
Okay ready?
42. Okay, now it is the men’s turn.
Remember the instructions: Read aloud the
COLOR that the words are presented in. When
done raise your hand.
Okay ready?
44. Our results
So why the difference between the results for
men versus women?
Is this support for a theory that proposes:
“Women are good color identifiers, men are not”
Why or why not? Let’s look at the two lists.
46. What resulted in the performance
difference?
Our manipulated independent variable
(men vs. women)
The other variable: whether the ink color matched or
mismatched the word
Because the two variables are
perfectly correlated we can’t tell which one caused the
difference
This is the problem with confounds
(The two lists contain the same words – Blue, Green, Red,
Purple, Yellow, Green, Purple, Blue, Red, Yellow, Blue, Red,
Green – but one list was printed in matching ink colors and
the other in mismatching ink colors. Diagram: the IV and the
confound co-vary together, and both are linked to the DV.)
47. What DIDN’T result in the performance
difference?
Extraneous variables
Control
• # of words on the list
• The actual words that were printed
Random
• Age of the men and women in the groups
These are not confounds, because
they don’t co-vary with the IV
48. “Debugging your study”
Pilot studies
A trial run-through
Don’t plan to publish these results; just try out the
methods
Manipulation checks
An attempt to directly measure whether the manipulation of
the IV really had its intended effect.
Look for correlations with other measures of the
desired effects.