Stats sessions 19 and 20
1. Learning Objectives
• Understand the differences between various
experimental designs and when to use them.
• Compute and interpret the results of a one-way
ANOVA.
• Compute and interpret the results of a random block
design.
• Compute and interpret the results of a two-way
ANOVA.
• Understand and interpret interactions between
variables.
• Know when and how to use multiple comparison
techniques.
2. Introduction to Design of Experiments
• Experimental Design
– A plan and a structure to test hypotheses in which
the researcher controls or manipulates one or more
variables.
3. Introduction to Design of Experiments
Independent Variable
• Treatment variable - one that the experimenter
controls or modifies in the experiment.
• Classification variable - a characteristic of the
experimental subjects that was present prior to the
experiment, and is not a result of the experimenter’s
manipulations or control.
• Levels or Classifications - the subcategories of the
independent variable used by the researcher in the
experimental design.
• Independent variables are also referred to as factors.
4. Independent Variable
• Manipulation of the independent variable
depends on the concept being studied
• The researcher studies the phenomenon under
conditions in which aspects of the variable are varied
5. Introduction to
Design of Experiments
• Dependent Variable
- the response to the different levels of the
independent variable
• Analysis of Variance (ANOVA) – a group of
statistical techniques used to analyze
experimental designs.
- ANOVA begins with the notion that the individual
items being studied are all the same
6. Three Types of Experimental Designs
• Completely Randomized Design – subjects are
assigned randomly to treatments; single
independent variable.
• Randomized Block Design – includes a blocking
variable; single independent variable.
• Factorial Experiments – two or more independent
variables are explored at the same time; every level
of each factor is studied under every level of all
other factors.
7. Completely Randomized Design
• The completely randomized design contains only
one independent variable with two or more
treatment levels.
• If only two treatment levels of the independent
variable are present, the design is the same as that
used to test the difference in the means of two
independent populations, which uses the t test to
analyze the data.
9. Completely Randomized Design
• A technique has been developed that analyzes all
the sample means at one time and precludes the
buildup of error rate: ANOVA.
• A completely randomized design is analyzed by
one-way analysis of variance (one-way ANOVA).
10. One-Way ANOVA:
Procedural Overview
H0: μ1 = μ2 = μ3 = … = μk
Ha: at least one of the means is different from the others

F = MSC / MSE

If F > Fc, reject H0.
If F ≤ Fc, do not reject H0.
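A minimal sketch of this decision rule in Python (assuming SciPy is available), reusing the mean squares and degrees of freedom from the valve-opening ANOVA table shown later in these notes:

```python
# One-way ANOVA decision rule: compare the observed F ratio
# to the critical F value at the chosen alpha.
from scipy import stats

MSC, MSE = 0.078860, 0.007746   # mean squares from the valve example
df_num, df_den = 3, 20          # df for columns (C-1) and error (N-C)
alpha = 0.05

F = MSC / MSE                               # observed F ratio
F_crit = stats.f.ppf(1 - alpha, df_num, df_den)  # critical value

print(f"F = {F:.2f}, critical F = {F_crit:.2f}")
if F > F_crit:
    print("Reject H0: at least one mean differs")
else:
    print("Do not reject H0")
```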
11. Analysis of Variance
• The null hypothesis states that the population
means for all treatment levels are equal.
• If even one of the population means differs
from the others, the null hypothesis is rejected.
• Testing the hypothesis is done by partitioning the
total variance of the data into the following two
variances:
- Variance resulting from the treatment (columns)
- Error variance or that portion of the total variance
unexplained by the treatment
13. Analysis of Variance
• The total sum of squares of variation is partitioned
into the sum of squares of treatment columns and
the sum of squares of error.
• ANOVA compares the relative sizes of the treatment
variation and the error variation.
• The error variation is unaccounted-for variation and
can be viewed at this point as variation due to
individual differences within the groups.
• If a significant difference in treatment is present,
the treatment variation should be large relative to
the error variation.
14. One-Way ANOVA:
Computational Formulas
• ANOVA is used to determine statistically
whether the variance between the treatment
level means is greater than the variances
within levels (error variance)
• Assumptions underlying ANOVA
Normally distributed populations
Observations represent random samples from
the population
Variances of the population are equal
15. One-Way ANOVA:
Computational Formulas
ANOVA is computed with the three sums of
squares:
• Total – Total Sum of Squares (SST); a measure of
all variations in the dependent variable
• Treatment – Sum of Squares Columns (SSC);
measures the variations between treatments or
columns since independent variable levels are
present in columns
• Error – Sum of Squares of Error (SSE); yields the
variations within treatments (or columns)
19. One-Way ANOVA:
Computational Formulas
• Other items
□ MSC – Mean Squares Columns
□ MSE – Mean Squares Error
□ MST – Mean Squares Total
• F value – determined by dividing the
treatment variance (MSC) by the error
variance (MSE)
□ F value is a ratio of the treatment variance to
the error variance
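A minimal sketch of these computations from raw data; the three groups below are made-up numbers for illustration only:

```python
# One-way ANOVA sums of squares, mean squares, and F,
# computed directly from grouped observations.
import numpy as np

groups = [np.array([4.1, 3.9, 4.3]),   # hypothetical treatment level 1
          np.array([3.2, 3.5, 3.3]),   # hypothetical treatment level 2
          np.array([4.8, 4.6, 4.7])]   # hypothetical treatment level 3

all_obs = np.concatenate(groups)
grand_mean = all_obs.mean()
C = len(groups)            # number of treatment levels
N = all_obs.size           # total number of observations

SST = ((all_obs - grand_mean) ** 2).sum()                      # total
SSC = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)  # columns
SSE = sum(((g - g.mean()) ** 2).sum() for g in groups)         # error

MSC = SSC / (C - 1)        # treatment (columns) mean square
MSE = SSE / (N - C)        # error mean square
F = MSC / MSE              # ratio of treatment to error variance
print(f"SST={SST:.4f}  SSC={SSC:.4f}  SSE={SSE:.4f}  F={F:.2f}")
```

Note that SST = SSC + SSE, matching the partition described above.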
21. Analysis of Variance for Valve Openings
Source of Variance    df    SS        MS         F
Between                3    0.23658   0.078860   10.18
Error                 20    0.15492   0.007746
Total                 23    0.39150
22. F Table
• F distribution table is in Table A7.
• Associated with every F table are two unique df
variables: degrees of freedom in the numerator,
and degrees of freedom in the denominator.
• Statistical computer software packages for
computing ANOVA usually give a probability for the
F value, which allows hypothesis testing decisions
for any value of alpha.
25. Multiple Comparison Tests
• ANOVA techniques are useful in testing hypotheses
about differences in means across multiple groups.
• Advantage: Probability of committing a Type I error
is controlled.
• Multiple Comparison techniques are used to
identify which pairs of means are significantly
different given that the ANOVA test reveals overall
significance.
26. Multiple Comparison Tests
• Multiple comparisons are used when an overall
significant difference between groups has been
determined using the F value of the analysis of
variance
• Tukey’s honestly significant difference (HSD) test
requires equal sample sizes
Takes into consideration the number of treatment levels,
value of mean square error, and sample size
27. Multiple Comparison Tests
• Tukey’s Honestly Significant Difference (HSD) – also
known as the Tukey’s T method – examines the
absolute value of all differences between pairs of
means from treatment levels to determine if there
is a significant difference.
• Tukey-Kramer Procedure is used when sample sizes
are unequal.
28. Tukey’s Honestly Significant
Difference (HSD) Test
If the absolute difference between a pair of means is
greater than the HSD, then the means of the two
treatment levels are significantly different.
29. Demonstration Example Problem
A company has three manufacturing plants, and
company officials want to determine whether there is
a difference in the average age of workers at the three
locations. The following data are the ages of five
randomly selected workers at each plant. Perform a
one-way ANOVA to determine whether there is a
significant difference in the mean ages of the workers
at the three plants. Use α = 0.01 and note that the
sample sizes are equal.
30. Data from Demonstration Example
PLANT (Employee Age)

Plant:           1      2      3
                29     32     25
                27     33     24
                30     31     24
                27     34     25
                28     30     26
Group Means   28.2   32.0   24.8
nj               5      5      5

C = 3
dfE = N - C = 12
MSE = 1.63
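As a check, the same test can be run with SciPy's built-in one-way ANOVA on the plant data above (a sketch, assuming SciPy is installed):

```python
# One-way ANOVA on the employee-age data using scipy.
from scipy import stats

plant1 = [29, 27, 30, 27, 28]
plant2 = [32, 33, 31, 34, 30]
plant3 = [25, 24, 24, 25, 26]

F, p = stats.f_oneway(plant1, plant2, plant3)
print(f"F = {F:.2f}, p = {p:.6f}")  # F is about 39.7; p is far below 0.01
```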
31. Tukey’s HSD test
• Since sample sizes are equal, Tukey’s HSD tests
can be used to compute multiple comparison tests
between groups
• To compute the HSD, the values of MSE, n, and
q must be determined, as in the sketch below
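A minimal sketch of the HSD computation for the employee-age data (MSE = 1.63, n = 5, dfE = 12 from the table above); scipy.stats.studentized_range supplies the q value and requires SciPy 1.7 or later:

```python
# Tukey's HSD for C = 3 treatments with n = 5 observations each.
import math
from scipy.stats import studentized_range

C, n, dfE, MSE, alpha = 3, 5, 12, 1.63, 0.01

q = studentized_range.ppf(1 - alpha, C, dfE)  # studentized range quantile
HSD = q * math.sqrt(MSE / n)
print(f"q = {q:.2f}, HSD = {HSD:.2f}")        # HSD is about 2.88
```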
33. Tukey’s HSD Test
for the Employee Age Data
All three pairwise differences are greater than 2.88.
Thus the mean ages for all pairs of plants are
significantly different.
35. Example: Mean Valve openings
produced by four operators
A valve manufacturing company wants to test whether there are
any differences in the mean valve openings produced
by four different machine operators. The data follow.
36. Example: Mean Valve openings
produced by four operators
Operator   Sample Size    Mean
1               5        6.3180
2               8        6.2775
3               7        6.4886
4               4        6.2300
37. Example: Tukey-Kramer Results for
the Four Operators
Pair      Critical Difference   |Actual Difference|
1 and 2          .1405                .0405
1 and 3          .1443                .1706*
1 and 4          .1653                .0880
2 and 3          .1275                .2111*
2 and 4          .1509                .0475
3 and 4          .1545                .2586*
*denotes significant at α = .05
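A sketch that reproduces these critical differences, assuming SciPy 1.7+ for the studentized range quantile; MSE = 0.007746 and dfE = 20 come from the valve-opening ANOVA table:

```python
# Tukey-Kramer critical differences for unequal sample sizes.
import math
from itertools import combinations
from scipy.stats import studentized_range

means = {1: 6.3180, 2: 6.2775, 3: 6.4886, 4: 6.2300}
sizes = {1: 5, 2: 8, 3: 7, 4: 4}
MSE, dfE, C, alpha = 0.007746, 20, 4, 0.05

q = studentized_range.ppf(1 - alpha, C, dfE)  # about 3.96
for r, s in combinations(means, 2):
    # Tukey-Kramer critical difference for this pair of groups
    cd = q * math.sqrt((MSE / 2) * (1 / sizes[r] + 1 / sizes[s]))
    diff = abs(means[r] - means[s])
    flag = "*" if diff > cd else ""
    print(f"{r} and {s}: CD = {cd:.4f}, |diff| = {diff:.4f} {flag}")
```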
38. Randomized Block Design
• Randomized block design - focuses on one
independent variable (treatment variable) of
interest.
• Includes a second variable (blocking variable) used
to control for confounding or concomitant
variables.
• A Blocking Variable can have an effect on the
outcome of the treatment being studied
• A blocking variable is a variable a researcher wants
to control but that is not a treatment variable of interest.
40. Examples: Blocking Variable
• In the study of growth patterns of varieties of seeds
for a given type of plant, different plots of ground
work as blocks.
• Machine number, worker, shift, day of the week etc.
• Gender, Age, Intelligence, Economic level of
subjects
• Brand, Supplier, Vehicle etc.
41. Randomized Block Design
• Repeated measures design – a design in which
each block level is an individual item or person, and
that person or item is measured across all
treatments
• A special case of Randomized Block Design
42. Randomized Block Design
• The sum of squares in a completely randomized
design is
SST = SSC + SSE
• In a randomized block design, the sum of squares is
SST = SSC + SSR + SSE
• SSR (blocking effects) comes out of the SSE
Some of the error variation in the completely
randomized design is due to blocking effects,
which the randomized block design separates out
43. Randomized Block Design Treatment
Effects: Procedural Overview
• The observed F value for treatments computed
using the randomized block design formula is tested
by comparing it to a table F value.
• If the observed F value is greater than the table
value, the null hypothesis is rejected for that alpha
value.
• If the F value for blocks is greater than the critical
F value, the null hypothesis that all block
population means are equal is rejected.
45. Randomized Block Design:
Computational Formulas
SSC = n Σ (X̄j − X̄)²,  summed over j = 1, …, C               dfC = C − 1
SSR = C Σ (X̄i − X̄)²,  summed over i = 1, …, n               dfR = n − 1
SSE = Σ Σ (Xij − X̄j − X̄i + X̄)²,  over all i and j           dfE = (C − 1)(n − 1) = N − n − C + 1
SST = Σ Σ (Xij − X̄)²,  over all i and j                      dfT = N − 1

MSC = SSC / (C − 1)
MSR = SSR / (n − 1)
MSE = SSE / (N − n − C + 1)

F(treatments) = MSC / MSE
F(blocks) = MSR / MSE

where:
i = block group (row)
j = a treatment level (column)
C = number of treatment levels (columns)
n = number of observations in each treatment level (number of blocks – rows)
Xij = individual observation
X̄j = treatment (column) mean
X̄i = block (row) mean
X̄ = grand mean
N = total number of observations
SSC = sum of squares columns (treatment)
SSR = sum of squares rows (blocking)
SSE = sum of squares error
SST = sum of squares total
46. Randomized Block Design:
Tread-Wear Example
As an example of the application of the randomized
block design, consider a tire company that developed a
new tire. The company conducted tread-wear tests on
the tire to determine whether there is a significant
difference in tread wear if the average speed with
which the automobile is driven varies. The company
set up an experiment in which the independent
variable was speed of automobile. There were three
treatment levels.
47. Randomized Block Design:
Tread-Wear Example
                       Speed
Supplier      Slow   Medium   Fast    Block Means (X̄i)
1              3.7     4.5     3.1         3.77
2              3.4     3.9     2.8         3.37
3              3.5     4.1     3.0         3.53
4              3.2     3.5     2.6         3.10
5              3.9     4.8     3.4         4.03
Treatment
Means (X̄j)    3.54    4.16    2.98    X̄ = 3.56

n = 5, C = 3, N = 15
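A minimal sketch of the randomized block computations for these data, following the formulas above; the results agree with the ANOVA table on the next slide up to rounding:

```python
# Randomized block ANOVA: rows are suppliers (blocks),
# columns are speeds (treatment levels).
import numpy as np

X = np.array([[3.7, 4.5, 3.1],
              [3.4, 3.9, 2.8],
              [3.5, 4.1, 3.0],
              [3.2, 3.5, 2.6],
              [3.9, 4.8, 3.4]])
n, C = X.shape              # n blocks, C treatment levels
grand = X.mean()            # grand mean

col_means = X.mean(axis=0)  # treatment (column) means
row_means = X.mean(axis=1)  # block (row) means

SSC = n * ((col_means - grand) ** 2).sum()   # treatment sum of squares
SSR = C * ((row_means - grand) ** 2).sum()   # blocking sum of squares
SST = ((X - grand) ** 2).sum()               # total sum of squares
SSE = SST - SSC - SSR                        # error sum of squares

MSC = SSC / (C - 1)
MSR = SSR / (n - 1)
MSE = SSE / ((C - 1) * (n - 1))
print(f"F_treatments = {MSC / MSE:.2f}, F_blocks = {MSR / MSE:.2f}")
```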
51. Analysis of Variance
for the Tread-Wear Example
Source of Variance    SS       df    MS          F
Treatment             3.484     2    1.742       97.45
Block                 1.541     4    0.38525     21.72
Error                 0.143     8    0.017875
Total                 5.176    14
54. Randomized Block Design:
Tread-Wear Example
• Because the observed value of F for treatment
(97.45) is greater than the critical F value, the null
hypothesis is rejected.
At least one of the population means of the
treatment levels is not the same as the others.
There is a significant difference in tread wear for
cars driven at different speeds
• The F value for treatment with the blocking was
97.45, and without the blocking it was 12.44
By using the randomized block design, a much larger
observed F value was obtained.
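A quick check of that comparison: folding SSR back into the error term and retesting the treatment effect reproduces the quoted F value approximately (a sketch using the printed, rounded sums of squares):

```python
# Without blocking, SSR and its df are pooled into the error term.
SSC, SSR, SSE = 3.484, 1.541, 0.143
MSC = SSC / 2                        # treatment mean square (dfC = 2)
MSE_pooled = (SSR + SSE) / (4 + 8)   # block df joins the error df
print(f"F = {MSC / MSE_pooled:.2f}") # about 12.4; the slide's 12.44
                                     # reflects less rounding upstream
```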