Inferential statistics are techniques that allow us to use samples to make generalizations about the populations from which the samples were drawn. The methods of inferential statistics are (1) the estimation of parameters and (2) the testing of statistical hypotheses.
1. Inference Techniques
This section gives a brief summary of the more commonly used tests of statistical significance that researchers employ, and then illustrates how to carry out one such test.
2. Statistical Techniques
There are two basic types of inference techniques that researchers use.
1. Parametric Techniques make various kinds of assumptions about the nature of the population from which the sample(s) involved in the research study are drawn.
2. Nonparametric Techniques, on the other hand, make few (if any) assumptions about the nature of the population from which the samples are taken.
3. Advantages and Disadvantages
Parametric Techniques
Advantage: They are generally more powerful than nonparametric techniques and hence much more likely to reveal a true difference or relationship if one really exists.
Disadvantage: A researcher often cannot satisfy the assumptions they require.
Nonparametric Techniques
Advantage: They are safer to use when a researcher cannot satisfy the assumptions underlying the use of parametric techniques.
Disadvantage: They are generally less powerful than parametric techniques.
4. PARAMETRIC
TECHNIQUES FOR
QUANTITATIVE DATA
• The t-test for means
• Analysis of Variance (ANOVA)
• Analysis of Covariance (ANCOVA)
• Multivariate Analysis of Variance
(MANOVA)
• The t-test for r
5. The t-test for means
The t-test is a parametric statistical test used to see whether a
difference between the means of two samples is significant.
There are two forms of this t-test:
1. The t-test for independent means is used to compare the means of two different, or independent, groups.
Degrees of freedom refers to the number of scores in a frequency distribution that are "free to vary", that is, not fixed. In an independent-samples t-test, the degrees of freedom are calculated by subtracting 2 from the total number of scores in both groups (df = n1 + n2 − 2).
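A minimal sketch of the independent-samples t statistic, using only the Python standard library (the equal-variance, pooled form; the score values are invented for illustration). Note that the degrees of freedom come out as the total number of scores minus 2, as described above:

```python
import math
from statistics import mean, variance  # variance() uses the n - 1 denominator

def independent_t_test(group_a, group_b):
    """t-test for independent means: returns (t, df) with df = n1 + n2 - 2."""
    n1, n2 = len(group_a), len(group_b)
    df = n1 + n2 - 2
    # Pooled variance: a weighted average of the two sample variances.
    sp2 = ((n1 - 1) * variance(group_a) + (n2 - 1) * variance(group_b)) / df
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))  # standard error of the difference
    t = (mean(group_a) - mean(group_b)) / se
    return t, df

treatment = [85, 90, 78, 92, 88]  # invented scores
control = [80, 75, 82, 79, 77]
t, df = independent_t_test(treatment, control)  # df = 5 + 5 - 2 = 8
```

The resulting t is then compared with a t distribution on df degrees of freedom to judge significance.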
6. The t-test for means
2. The t-test for correlated means is used to compare the mean scores of the same group before and after a treatment is given, to see whether any observed gain is significant, or when the research design involves two matched groups. It is also used when the same subjects receive two different treatments in a study.
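The correlated-means form works on each subject's difference score, as a minimal sketch shows (pre/post values invented; df here is n − 1, the number of pairs minus 1):

```python
import math
from statistics import mean, stdev

def paired_t_test(before, after):
    """t-test for correlated (paired) means, computed on the per-subject
    difference scores. Returns (t, df) with df = n - 1."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

pre = [12, 15, 11, 14, 13, 10]   # invented pretest scores
post = [14, 18, 12, 17, 15, 13]  # invented posttest scores
t, df = paired_t_test(pre, post)
```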
7. Analysis of Variance (ANOVA)
It is used when researchers want to find out whether there are significant differences between the means of more than two groups. It is actually a more general form of the t-test, appropriate for use with three or more groups. (It can also be used with two groups.)
Variation both within and between each of the groups is analyzed statistically, yielding what is known as an F value.
When only two groups are being compared, the F test is sufficient to tell the researcher whether significance has been achieved.
When more than two groups are being compared, the F test will not, by itself, tell us which of the means are different.
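The within/between partition of variation described above can be sketched in a few lines of standard-library Python (the three groups of scores are invented):

```python
from statistics import mean

def one_way_anova(*groups):
    """One-way ANOVA: splits variation into between-group and within-group
    parts and returns (F, df_between, df_within)."""
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total number of scores
    grand = mean(x for g in groups for x in g)   # grand mean of all scores
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

f, df_b, df_w = one_way_anova([3, 5, 4], [8, 9, 7], [6, 6, 6])
```

The F value is then checked against an F distribution on (df_between, df_within) degrees of freedom.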
8. Analysis of Covariance (ANCOVA)
is a variation of ANOVA used when, for example, groups are given a pretest related in some way to the dependent variable and their mean scores on this pretest are found to differ.
It enables researchers to adjust the posttest mean scores on the dependent variable for each group to compensate for the initial differences between the groups on the pretest.
The pretest is called the covariate.
Like ANOVA, ANCOVA produces an F value, which is then looked up in a statistical table to determine whether it is statistically significant.
9. Multivariate Analysis of Variance (MANOVA)
Differs from ANOVA in only one respect: It incorporates two or more dependent variables in the same analysis, thus permitting a more powerful test of differences among means.
It is justified only when the researcher has reason to believe correlations exist among the dependent variables.
Similarly, MANCOVA (multivariate analysis of covariance) extends ANCOVA to include two or more dependent variables in the same analysis.
The specific value that is calculated is Wilks' lambda, a number analogous to F in analysis of variance.
10. The t-Test for r
is used to see whether a correlation coefficient calculated on sample data is significant, that is, whether it represents a non-zero correlation in the population from which the sample was drawn.
It is similar to the t-test for means, except that here the
statistic being dealt with is a correlation coefficient (r) rather
than a difference between means.
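The standard formula for this test is t = r·√(n − 2) / √(1 − r²), with n − 2 degrees of freedom. A minimal sketch (the r and n values are invented):

```python
import math

def t_for_r(r, n):
    """t-test for r: tests whether a sample correlation coefficient r
    reflects a non-zero population correlation. Returns (t, df), df = n - 2."""
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
    return t, n - 2

t, df = t_for_r(0.6, 27)  # e.g. r = .60 computed from a sample of 27
```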
12. The Mann-Whitney U test
is a nonparametric alternative to the t-test, used when a researcher wishes to analyze ranked data. The researcher intermingles the scores of the two groups and then ranks them as if they were all from just one group.
If the parent populations are identical, then the sum of the pooled rankings for each group should be about the same.
If the summed ranks are markedly different, on the other hand, then this difference is likely to be statistically significant.
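The pooling-and-ranking procedure can be sketched as follows (invented scores; tied values receive the average of the ranks they span, and the smaller of the two U values is the one checked against a table):

```python
def mann_whitney_u(group_a, group_b):
    """Mann-Whitney U: pool the scores, rank them as one group, then
    derive U from the rank sum. Returns the smaller U value."""
    n1, n2 = len(group_a), len(group_b)
    pooled = sorted([(x, 0) for x in group_a] + [(x, 1) for x in group_b])
    rank_sum_a = 0.0
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1                      # span of tied values
        avg_rank = (i + 1 + j) / 2      # average of 1-based ranks i+1 .. j
        rank_sum_a += avg_rank * sum(1 for k in range(i, j) if pooled[k][1] == 0)
        i = j
    u1 = n1 * n2 + n1 * (n1 + 1) / 2 - rank_sum_a
    u2 = n1 * n2 - u1
    return min(u1, u2)

u = mann_whitney_u([1, 3, 5], [2, 4, 6])  # invented ranked scores
```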
13. The Kruskal-Wallis One-Way Analysis of Variance
Is used when researchers have more than two
independent samples to compare.
It is quite similar to the Mann-Whitney U test: the scores are pooled and ranked, and the sums of the ranks for each of the separate groups are then compared.
The analysis produces a value (H), just as the Mann-Whitney test produces a value (U), whose probability of occurrence is checked by the researcher in the appropriate statistical table.
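A sketch of the H computation using the standard formula H = 12/(N(N+1)) · Σ(R_j²/n_j) − 3(N+1), where R_j is the rank sum of group j (the three groups of scores are invented):

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H: rank all scores as one pool (ties get average
    ranks), then compare the rank sums of the separate groups."""
    pooled = sorted((x, gi) for gi, g in enumerate(groups) for x in g)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2      # average 1-based rank over ties
        for k in range(i, j):
            rank_sums[pooled[k][1]] += avg_rank
        i = j
    return (12 / (n * (n + 1))
            * sum(r * r / len(g) for r, g in zip(rank_sums, groups))
            - 3 * (n + 1))

h = kruskal_wallis_h([1, 2, 3], [4, 5, 6], [7, 8, 9])
```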
14. The Sign Test
is used when a researcher wants to analyze two related
samples. Related samples are connected in some way.
This test is very easy to use. If the groups do not differ significantly, then the totals for the two groups should be about equal.
If there is a marked difference in scoring, the difference
may be statistically significant.
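A minimal sketch of the sign test: count how many paired subjects changed in each direction (ties dropped) and compute the exact two-tailed binomial probability with p = 0.5. The pre/post values are invented:

```python
import math

def sign_test(before, after):
    """Sign test for two related samples: returns (plus, minus, p),
    where p is the exact two-tailed binomial probability."""
    signs = [(a > b) - (a < b) for a, b in zip(after, before)]
    plus = sum(1 for s in signs if s > 0)
    minus = sum(1 for s in signs if s < 0)
    n = plus + minus            # ties contribute no sign and are dropped
    k = min(plus, minus)
    # Two-tailed: 2 * P(X <= k) under Binomial(n, 0.5), capped at 1.
    p = sum(math.comb(n, i) for i in range(k + 1)) * 2 / 2 ** n
    return plus, minus, min(p, 1.0)

pre = [10, 12, 9, 15, 11, 13, 8, 14]   # invented scores
post = [12, 14, 11, 17, 13, 15, 7, 16]
plus, minus, p = sign_test(pre, post)
```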
15. The Friedman Two-Way Analysis of Variance
is used if there are more than two related groups involved.
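For this test, each subject's scores under the k related conditions are ranked within that subject, and the rank sums of the conditions are compared via the statistic χ²_r = 12/(nk(k+1)) · ΣR_j² − 3n(k+1). A sketch with invented scores:

```python
def friedman_chi2(scores):
    """Friedman two-way analysis of variance by ranks. `scores` is a list
    of rows, one per subject, each holding that subject's score under each
    of the k related conditions. Ties within a row get average ranks."""
    n, k = len(scores), len(scores[0])
    rank_sums = [0.0] * k
    for row in scores:
        ordered = sorted(range(k), key=lambda c: row[c])
        i = 0
        while i < k:
            j = i
            while j < k and row[ordered[j]] == row[ordered[i]]:
                j += 1
            avg = (i + 1 + j) / 2       # average 1-based rank over ties
            for m in range(i, j):
                rank_sums[ordered[m]] += avg
            i = j
    return 12 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)

# Three subjects measured under three related conditions (invented data).
chi2_r = friedman_chi2([[1, 2, 3], [1, 2, 3], [1, 2, 3]])
```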
17. T-Test for Proportions
is the most commonly used parametric test for analyzing categorical data.
Two forms:
1. The t-test for independent proportions
2. The t-test for correlated proportions
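For large samples, the independent-proportions form reduces to comparing the two sample proportions against a standard error built from the pooled proportion. A minimal sketch (the counts are invented):

```python
import math

def z_for_independent_proportions(x1, n1, x2, n2):
    """Test of the difference between two independent proportions,
    using the pooled-proportion standard error. x1/n1 and x2/n2 are
    the number of 'successes' over the group sizes."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# e.g. 30 of 50 in one group vs. 20 of 50 in the other (invented counts)
z = z_for_independent_proportions(30, 50, 20, 50)
```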
19. The Chi-Square Test
is used to analyze data that are reported in categories. It is
based on a comparison between expected frequencies and actual,
obtained frequencies.
If the obtained frequencies are similar to the expected
frequencies, then the researchers conclude that the groups do not
differ.
If there are considerable differences between the expected and obtained frequencies, on the other hand, then researchers conclude that there is a significant difference between the groups.
Contingency Coefficient – the final step in the chi-square test process is to calculate this coefficient, symbolized by the letter C.
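The expected-versus-obtained comparison, and the contingency coefficient C = √(χ²/(χ² + N)), can be sketched as follows (the 2×2 table of frequencies is invented; expected frequencies come from the row and column totals):

```python
import math

def chi_square_test(table):
    """Chi-square test of a contingency table (list of rows of observed
    frequencies). Returns (chi2, C), where C is the contingency coefficient."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / n   # expected frequency
            chi2 += (obs - exp) ** 2 / exp
    c = math.sqrt(chi2 / (chi2 + n))                  # contingency coefficient
    return chi2, c

chi2, c = chi_square_test([[30, 10], [20, 40]])  # invented frequencies
```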
21. 1. Decrease sampling error by:
a. Increasing Sample size. An estimate of the necessary sample
size can be obtained by doing a statistical power analysis.
This requires an estimate of all the values (except n, the sample size) used in calculating the statistic you plan to use, and then solving for n.
b. Using reliable measures to decrease measurement errors.
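The power analysis in 1a can be sketched with a standard approximation for comparing two means: per-group n ≈ 2(z_α/2 + z_power)²/d², where d is the standardized effect size. The d = 0.5 ("medium" effect) value below is only an illustration:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-tailed, two-sample
    comparison of means, given a standardized effect size d."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)           # about 0.84 for power = .80
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

n = n_per_group(0.5)  # n per group needed to detect d = 0.5
```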
22. 2. Controlling for extraneous variables, as these may obscure
the relationship being studied.
3. Increasing the strength of the treatment (if there is one), perhaps by using a longer time period.
4. Using a one-tailed test, when such is justifiable