2. Levels of Measurement
ratio scale: equal intervals between all values and a true
zero point; allows meaningful ratios between values
interval scale: magnitude or quantitative size; equal
intervals between all values but there is no true zero
point
ordinal scale: shows differences in magnitude only
(measured by rankings); equal intervals are not
assumed and there is no true zero point
nominal scale: classifies items into categories that have
no quantitative relationship to one another; provides the
least amount of information; nothing about magnitude or
intervals
3. Selecting a Statistical Test
1. How many IVs are there?
2. How many treatment conditions are there?
3. Is the experiment run between- or within-subjects?
4. Are the subjects matched?
5. What is the level of measurement of the DV?
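The five questions above can be sketched as a small decision helper. This is an illustrative sketch, not an exhaustive decision chart: the function name and rules are assumptions, and the only tests it suggests are the ones covered in these notes.

```python
# Illustrative sketch only: maps answers to the five questions onto the
# tests covered in these notes. The rules are simplified assumptions.
def suggest_test(num_ivs, num_conditions, design, dv_level):
    """design: 'between' or 'within';
    dv_level: 'nominal', 'ordinal', 'interval', or 'ratio'."""
    if dv_level == "nominal":
        return "chi-square test"          # frequency (categorical) data
    if num_ivs == 1:
        if num_conditions == 2:
            return "t test"               # comparing two means
        if design == "between":
            return "one-way between-subjects ANOVA"
        return "one-way repeated measures ANOVA"
    if num_ivs == 2:
        return "two-way ANOVA"
    return "consult a fuller decision chart"

print(suggest_test(1, 2, "between", "ratio"))    # t test
print(suggest_test(1, 3, "within", "interval"))  # one-way repeated measures ANOVA
```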
4. Chi-Square Test
nonparametric test: it does not assume that the
population has certain parameters (e.g., a normal
distribution) or that the variances of the two groups are
approximately equal
compares the frequencies obtained with the expected
population frequencies to test the null hypothesis;
often run on a 2 x 2 contingency table.
If the obtained chi-square exceeds the critical value
(at p < .05), the null hypothesis can be rejected.
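A minimal sketch of the computation on a 2 x 2 contingency table, using only the standard library. The cell counts are made up for illustration; 3.84 is the chi-square critical value for df = 1 at p < .05.

```python
# Chi-square test of independence on a 2 x 2 table (made-up counts).
observed = [[30, 10],   # group A: yes / no
            [20, 40]]   # group B: yes / no

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Expected frequency per cell: (row total * column total) / grand total
expected = [[r * c / n for c in col_totals] for r in row_totals]

chi_square = sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
                 for i in range(2) for j in range(2))

print(round(chi_square, 2))  # 16.67
# df = 1, so compare against the critical value 3.84 for p < .05:
print("reject null" if chi_square > 3.84 else "fail to reject null")
```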
5. Degrees of Freedom
Tell how many members of a set of data
could change value without changing the
value of a statistic we already know for those
data
For a contingency table: df = (rows - 1) x (columns - 1);
e.g., a 2 x 2 table has df = (2 - 1)(2 - 1) = 1.
6. Cramer’s Coefficient phi
Phi is an estimate of the degree of association
between the two categorical variables tested by
chi-square; interpreted similarly to r
*Cohen (1988) suggests the following criteria
for interpreting the size of phi: .10= small
degree of association; .30= medium degree
of association; .50= large degree of
association.
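For a 2 x 2 table, phi is computed as the square root of chi-square divided by N. A minimal sketch, with a made-up chi-square value and sample size:

```python
# Cramer's phi for a 2 x 2 chi-square result: phi = sqrt(chi-square / N).
# The chi-square value and N below are hypothetical.
import math

chi_square = 16.67   # obtained chi-square (hypothetical)
n = 100              # total number of observations

phi = math.sqrt(chi_square / n)
print(round(phi, 2))  # 0.41 -> medium-to-large association by Cohen's criteria
```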
7. The T Test
Statistical test that determines whether the
difference between the means of two samples
is significant.
It yields a p-value: the probability of obtaining
a difference at least as large as the one observed
if the two population means were actually equal.
A small p (e.g., p < .05) suggests the means differ;
a large p means the data are consistent with
equal population means.
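A minimal sketch of an independent-samples t test using only the standard library (this is what a routine like scipy.stats.ttest_ind computes). The sample data are made up, and 2.306 is the two-tailed critical t for df = 8 at p < .05.

```python
# Independent-samples t test with a pooled variance estimate
# (assumes roughly equal variances in the two groups). Made-up data.
from statistics import mean, variance

group1 = [5, 7, 8, 6, 9]
group2 = [3, 4, 5, 4, 6]

n1, n2 = len(group1), len(group2)
m1, m2 = mean(group1), mean(group2)

# Pooled variance across the two samples
sp2 = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
t = (m1 - m2) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5

print(round(t, 2))  # 2.98
# df = n1 + n2 - 2 = 8; compare |t| to 2.306 (two-tailed, p < .05):
print("significant" if abs(t) > 2.306 else "not significant")
```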
8. Analysis of Variance (ANOVA)
Statistical procedure used to evaluate
differences among three or more treatment
means; divides all the variance in the data
into component parts and then
compares/evaluates them for statistical
significance.
9. Simplest ANOVAs
Within groups variabilty is the extent to which
subject scores differ from one another under the
same treatment group.
> error; explain the variability
Between groups variability is the extent to
which group performance differs from one
treatment condition to another.
> made up of error and effects of IV
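The between/within partition above can be sketched as a one-way between-subjects ANOVA computed by hand; the F ratio is the between-groups variance divided by the within-groups variance. The data are made up for illustration.

```python
# One-way between-subjects ANOVA: partition total variability into
# between-groups and within-groups components. Made-up data.
from statistics import mean

groups = [
    [4, 5, 6],    # treatment condition 1
    [6, 7, 8],    # treatment condition 2
    [9, 10, 11],  # treatment condition 3
]

grand_mean = mean(x for g in groups for x in g)

# Between-groups SS: how far each group mean falls from the grand mean
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
# Within-groups SS: how far each score falls from its own group mean
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

df_between = len(groups) - 1                       # 2
df_within = sum(len(g) for g in groups) - len(groups)  # 6

f = (ss_between / df_between) / (ss_within / df_within)
print(round(f, 2))  # 19.0
```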
10. Sources of Variability
individual differences
different scores
extraneous variables
experimental manipulation
treatment conditions
11. All aspects of error that produce variability in subjects' data:
individual differences
undetected mistakes in recording data
variations in testing conditions
a host of extraneous variables
12. One-way between-subjects analysis of variance
treatment groups must be independent
only one IV
samples must be randomly selected
populations are normally distributed on the DV and the
variances are equal (homogeneous)
13. Graphing the results
line or bar graph to help summarize findings;
IV on horizontal axis, DV on vertical axis;
data points represent group means
14. Interpreting Results
Two types of follow up test:
1. post hoc tests: tests done after the overall
analysis indicates a significant difference.
2. a priori comparisons: tests between specific
treatment groups that were anticipated or planned
before the experiment was conducted.
15. One way repeated measures ANOVA
Used to determine whether multiple groups
are different where the participants are the
same in each group. The groups are
sometimes called “related groups”
16. Two way ANOVA
treatment groups are independent from each
other and the observations are randomly
sampled; assume population from each
group is normally distributed on the DV.