This document describes using a split-plot design for a wind tunnel experiment to optimize the aerodynamic performance of a racecar. The experiment had 4 factors, with 2 that were hard-to-change (front and rear ride heights) and 2 that were easy-to-change (yaw angle and grill cover). A split-plot design was used to reduce the total time needed, collecting data from 45 runs over 10 hours instead of 36 runs over 30 hours. The analysis accounted for two sources of error and showed several significant factors for improving downforce and reducing drag.
2. Design of Experiments (DOE)
Basic idea is to simultaneously study the impact of
several factors on the response(s) of interest
Sequential Approach from screening to optimization
Interactions among factors are important
Surfaces can be linear or quadratic
3. Guidelines for DOE
State the problem and clearly define the objectives of
the study
Choose the factors to be studied and their levels
Determine the responses and how to measure them
Determine the appropriate experimental design
4. Guidelines for DOE
Execute the design
Statistically analyze the data
Verify results using confirmatory runs
Make recommendations
5. Racecar Experiment Introduction
Wind tunnel experiment to characterize aerodynamic
performance and develop improvements
Typical factors are vehicle attitude, ride height, yaw
angle, and vehicle geometry
Responses include lift (downforce), drag, and lift to
drag ratio
Goal is often to minimize drag while maintaining a
specified level of downforce
6. Racecar Experiment Introduction
The experiment was performed in the wind tunnel at
Langley Air Force Base (also used extensively for
aircraft)
Four factors were considered
• Front end height
• Rear end height
• Yaw angle
• Grill cover
8. Racecar Experiment Introduction
They were concerned about possible curvature in the
response
A previous experiment had been performed (36 runs)
• Replicated 2⁴ design with 4 center points
Problems:
• When either the front end or rear end height level is
changed, the other end also changes (needs adjusting)
• The only way to change height is by shutting down the wind
tunnel (30-45 minutes to get back to equilibrium)
• The other factors have easy-to-change levels (3 minutes)
• (30 hours to complete the experiment)
9. Racecar Experiment Introduction
Can we use some other advanced DOE tools to
reduce the total time of the experiment?
Can we collect more overall data using an advanced
tool?
How does using the advanced tool affect the analysis
of the data?
Answer to all 3 is YES!!!
10. Transition
We will come back to the racecar example later
We need to cover some general information about the
advanced design that we will use
11. Treatments
Factors have different levels used in the experiment
Treatments: the combination of the factor levels used
in the experiment
Example: Temperature (100, 200) and Material (A, B, C)
• Treatments are:
• (100, A) (200, A) (100, B) (200, B) (100, C) (200, C)
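As a quick sketch, the treatment combinations above can be enumerated as a cross product of the factor levels (factor names and levels taken from the example; Python used for illustration):

```python
# Sketch: treatments are the cross of all factor levels
# (Temperature x Material example from above).
from itertools import product

temperature = [100, 200]
material = ["A", "B", "C"]

# Each treatment is one combination of a temperature and a material.
treatments = list(product(temperature, material))
print(treatments)
print(len(treatments))  # 2 levels x 3 levels = 6 treatments
```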
12. DOE Basics
Three Principles of DOE
• Randomization—randomly assign the treatments to the
units of interest
• Replication—assign the treatments to more than one unit
• Local Control—control for known sources of variation
through blocking
We will focus on all three of these in some fashion
13. DOE Units
There are two types of units in a DOE
• Experimental Unit: the smallest unit to which a treatment
can be applied independently of all other treatments
• Observational Unit: the unit we take measurements on
Most of the time these units are the same
It is important that we understand that experimental
error comes from variation in running the same
treatment on more than one experimental unit
14. DOE Units
Consider an example of spraying pesticides on trees:
two brands and two amounts with 2 replicates
15. DOE Units
We spray the trees, then take several leaves from
each tree and count the number of bugs
Thus we have two types of units
• Experimental Unit: TREE (replicate)
• Observational Unit: LEAF (repeat)
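A minimal numeric sketch of this distinction, using made-up bug counts:

```python
# Minimal sketch: experimental error comes from variation between
# replicate TREES given the same treatment, not between the leaves
# (repeats) on a single tree. Bug counts are made up.
from statistics import variance

# two replicate trees for one brand/amount treatment,
# with four leaves counted per tree
tree1_leaves = [4, 6, 5, 5]
tree2_leaves = [9, 7, 8, 8]

# collapse the repeats: one response per experimental unit (tree)
tree_means = [sum(t) / len(t) for t in (tree1_leaves, tree2_leaves)]
print(tree_means)  # [5.0, 8.0]

# tree-to-tree variability is the proper basis for experimental error
exp_error = variance(tree_means)
print(exp_error)  # 4.5
```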
16. Randomization
Let’s talk about the randomization principle
• Randomization is done to “average” out the effects of lurking
variables
• A fundamental philosophy---textbooks assume for almost all
techniques that the design is randomized
• Most software for DOE automatically randomizes the runs
• Unfortunately, random run order often results in changes to
factor settings after each run for many of the factors in the DOE
• What should be done, then, if one or more of these factors
cannot be easily or quickly changed?
17. Types of Factors
It is actually common in industry to have one or more
factors that are not easily randomized
Examples include temperatures, pressures, prototype
factors, and changeover factors
These factors are often called Hard-to-Change (HTC)
factors while the rest of the factors in the design are
referred to as Easy-to-Change (ETC) factors
Many people ignore the impact of the HTC factors
18. Split-Plot Design
The split-plot design originated in agriculture
Experimental Units:
• Irrigation: the whole Column
• Fertilizer: the individual Plot within a column
[Field layout: a 4 x 6 grid of plots; each of the 6 columns receives one irrigation level (I1, I2, I3, each used twice), and the four fertilizer levels F1-F4 are randomized among the 4 plots within each column]
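The two-stage randomization behind this layout can be sketched as follows (a minimal illustration, assuming 6 columns with each irrigation level used twice, as in the field layout):

```python
# Sketch of the two-stage split-plot randomization for the field:
# irrigation (hard to change) is randomized to whole columns, then
# fertilizer is randomized to the four plots within each column.
import random

random.seed(1)  # seed only to make the sketch reproducible

irrigation = ["I1", "I2", "I3"]
fertilizer = ["F1", "F2", "F3", "F4"]

# Stage 1: assign each of the 6 columns an irrigation level
# (each level appears twice, matching the 6-column field).
columns = irrigation * 2
random.shuffle(columns)

# Stage 2: within each column, randomize the fertilizer plots.
field = []
for col_irrigation in columns:
    plots = fertilizer[:]
    random.shuffle(plots)
    field.append((col_irrigation, plots))

for col in field:
    print(col)
```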
20. Industrial Example 2
Baking a cake
• Oven Temperature, Egg Powder, Flour, Sugar
• How do we conduct the experiment?
• Mix up a cake with some level of Egg Powder, Flour, Sugar
then bake it at a certain temperature
• Notice this will take a long time to carry out
Another idea: fix the temperature at a level, then bake
all the cakes involving Egg Powder, Flour and Sugar
Then change the temperature and again bake all the
cakes
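The restricted run order described above can be sketched in code (the level labels such as E-/E+ are illustrative, not from the original experiment):

```python
# Sketch of the restricted run order: fix the oven Temperature, bake
# all 8 Egg x Flour x Sugar cakes in random order, then switch the
# Temperature and bake the other 8.
import random
from itertools import product

random.seed(2)  # seed only to make the sketch reproducible

temps = ["Low", "High"]
random.shuffle(temps)  # whole-plot randomization: oven settings

# 2x2x2 = 8 cake recipes (illustrative -/+ level labels)
recipes = list(product(["E-", "E+"], ["F-", "F+"], ["S-", "S+"]))

run_order = []
for temp in temps:
    cakes = recipes[:]
    random.shuffle(cakes)  # subplot randomization: cakes within oven
    for cake in cakes:
        run_order.append((temp, *cake))

print(len(run_order))  # 2 oven settings x 8 cakes = 16 runs
```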
21. Baking a Cake
This looks something like
The experimental unit for Temperature is the Oven
The experimental unit for other factors is a Cake
Cakes are observational units for Temperature
[Diagram: two ovens, one at Low Temp and one at High Temp, each baking its batch of cakes]
22. Baking a Cake
We average the observational units (repeats) to get
the response for the experimental unit
Therefore to get the response for Temperature at High,
we would average the 8 cakes involving the different
combinations of Egg Powder, Flour, Sugar
Hence, we only have 2 observations for Temperature:
one at High and one at Low
To get an estimate of error, we would need to run the
Temperature at High twice and at Low twice
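A minimal sketch of this averaging step, using made-up cake responses:

```python
# Sketch: the whole-plot response for Temperature is the mean of the
# 8 cakes (repeats) baked at that oven setting. Responses are made up.
from statistics import mean

cakes_high = [5.1, 5.3, 4.9, 5.0, 5.2, 5.4, 5.1, 5.0]
cakes_low = [4.2, 4.0, 4.3, 4.1, 4.4, 4.2, 4.1, 4.3]

temp_response = {"High": mean(cakes_high), "Low": mean(cakes_low)}
print(temp_response)

# Only 2 observations for Temperature: one at High, one at Low.
# No error estimate is possible without repeating each oven setting.
print(len(temp_response))  # 2
```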
23. Baking a Cake
In addition to the two different experimental units, there
are two randomizations
We randomly assign the Temperature to the oven,
then randomly assign the cakes within the oven
Therefore, there are two error terms
• One for testing Temperature
• One for testing Egg Powder, Flour, Sugar and all the
interactions
25. Blocking
Why is the design not a block design?
• A block is a collection of similar experimental units
• Temperature is a factor applied to the experimental units
• There is interest in the interactions with Temperature
• The resulting design has two errors from two kinds of
experimental units
However, we will be able to take advantage of the fact
that it looks like a block in order to construct the design
26. Baking a Cake
In Minitab, we construct the subplot design in “blocks”
to get the right structure of the design
Stat > DOE > Factorial > Create Factorial Design
28. Baking a Cake
It is common to rename the Blocks column to Rep
To create the Temperature column
• Calc > Make Patterned Data > Simple Set of Numbers
29. Baking a Cake
The analysis involves a small trick to get the right error
for Temperature since by default Minitab only has one
error term
Consider a simple One-Way ANOVA case
The error is a nested term
We use this knowledge to trick Minitab to get the
correct error for Temperature
30. Baking a Cake
Ignoring the two error terms leads to:
• Using too small an error for Temp (inflated Type I error)
• Using too large an error for the other terms (inflated Type II error)
Term     Correct (2 errors)   Incorrect (1 error)
Temp     Not                  Signif.
Egg      Signif.              Signif.
Flour    Signif.              Signif.
Sugar    Not                  Not
T*E      Signif.              Not
T*F      Signif.              Signif.
T*S      Not                  Not
E*F      Signif.              Not
E*S      Not                  Not
F*S      Not                  Not
31. Back to the Racecar Experiment
[Figure: racecar wind tunnel setup, with responses CL-front, CL-rear, and CD; the yaw angle and the grille with tape are indicated]
32. Racecar Experiment
Recall that we have 4 factors
• 2 HTC factors (front and rear heights)
• 2 ETC factors (yaw and grill cover)
So a replicated 2² gives 8 runs for the heights (1 center
point in the heights was included, for a total of 9 runs)
This means changing the heights only 9 times
In each HTC run, 5 ETC combinations are carried out
(a 2² plus one center run in Yaw and Grill Cover)
33. Racecar Experiment
1. Randomly select the ride height factor levels of the car
2. At the factor levels from step 1, run all combinations
of yaw and grille tape in random order
3. Randomly select another ride height combination
4. Again run all combinations of yaw angle and grille
tape in random order
5. Repeat these steps until all ride height combinations
have been tested
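The five steps above can be sketched in code (the coded factor levels -1/0/+1 are an assumption for illustration):

```python
# Sketch of the racecar run order: 9 ride-height (HTC) settings in
# random order, each followed by 5 yaw/grille (ETC) runs in random
# order. Coded -1/0/+1 levels are illustrative.
import random

random.seed(3)  # seed only to make the sketch reproducible

# replicated 2x2 in the heights (8 points) plus 1 center point = 9
heights = [(f, r) for f in (-1, 1) for r in (-1, 1)] * 2 + [(0, 0)]
# 2x2 in yaw and grille plus 1 center point = 5 per height setting
etc_runs = [(y, g) for y in (-1, 1) for g in (-1, 1)] + [(0, 0)]

random.shuffle(heights)  # steps 1/3/5: random height order
run_order = []
for front, rear in heights:
    subruns = etc_runs[:]
    random.shuffle(subruns)  # steps 2/4: random ETC order
    for yaw, grille in subruns:
        run_order.append((front, rear, yaw, grille))

print(len(run_order))  # 9 height settings x 5 ETC runs = 45 runs
```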
35. Racecar Experiment
This leads to a total of 45 runs
But it only took about 10 hours to complete
So more data, in about 1/3 of the time
Wind tunnel time is very expensive so this was a huge
savings
Analysis is more complicated than Cake example
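A rough back-of-the-envelope check of the time savings, assuming ~45 minutes per height change and ~3 minutes per easy run, per the figures quoted earlier:

```python
# Rough arithmetic sketch of the time savings, assuming ~45 min per
# height change (tunnel shutdown + re-equilibrium) and ~3 min per
# easy-to-change run, per the figures quoted earlier.
height_change_min = 45
easy_run_min = 3

# Original completely randomized design: 36 runs, with the heights
# potentially changing on every run.
crd_hours = 36 * (height_change_min + easy_run_min) / 60
print(round(crd_hours, 1))  # ~28.8 h, consistent with "30 hours"

# Split-plot: only 9 height changes, then 45 quick ETC runs.
sp_hours = (9 * height_change_min + 45 * easy_run_min) / 60
print(round(sp_hours, 1))  # 9.0 h, consistent with "about 10 hours"
```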
36. Racecar Experiment
We have two error terms
We also have center points
But since we use ANOVA for the analysis, Minitab will
think there are 3 distinct levels instead of 2 levels with
center points
We need to create a bunch of terms in the calculator
(interactions and center points)
37. Racecar Analysis
The analysis needs to be done in two stages
• First is to do the HTC factor analysis using the means of the
5 ETC combinations from each of the 9 runs of the HTC
factors
• This gives the correct tests for the HTC factors
• Second is to use a categorical factor with 5 levels
(representing the 5 combinations of the HTC factors) to
account for the correct SS and df from the HTC factors
• Doing this, along with the nested trick from earlier,
gives all the correct tests for the other terms
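Stage 1 of this analysis can be sketched as follows (downforce values are made up for illustration):

```python
# Sketch of stage 1: collapse each of the 9 whole plots (one per
# ride-height setting) to the mean of its 5 ETC runs. Downforce
# values are made up; here plot i has mean 100 + i by construction.
whole_plots = {i: [100 + i + d for d in (-2, -1, 0, 1, 2)] for i in range(9)}

stage1_means = {i: sum(runs) / len(runs) for i, runs in whole_plots.items()}
print(stage1_means)

# 9 means -> the data set for the HTC-factor analysis, whose tests
# use the whole-plot error as the denominator
print(len(stage1_means))  # 9
```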
39. Summary
DOE is a great tool for learning about and optimizing
products/processes
Many applications of DOE involve HTC factors
Using a split-plot design saves time and money
Analysis is more complicated
MINITAB-16 will have 2-level split-plot designs
40. References
Montgomery DC. Design and Analysis of Experiments,
6th ed., John Wiley & Sons, New York, 2004.
Kowalski, S. M.; Parker, P. A.; and Vining, G. G.
(2007). “Tutorial on Split-Plot Experiments”. Quality
Engineering 19, pp. 1-16.
Simpson, J. R.; Kowalski, S. M.; and Landman, D.
(2004). “Experimentation With Randomization
Restrictions: Targeting Practical Implementation”.
Quality and Reliability Engineering International
20(5), pp. 481-495.