


             Seminar Report
                 On
            Taguchi’s Methods


Submitted by:                        Submitted to:
Ishwar Chander (800982007)           Mr. Tarun Nanda
Pulkit Bajaj (800982019)             (Sr. Lecturer)
Hitesh Bansal (800982006)




Department of Mechanical Engineering
       Thapar University, Patiala
               (Deemed University)


                                CONTENTS

   Chapter 1
   1.1 Introduction
   1.2 Definitions of Quality
       1.2.1 Traditional and Taguchi Definitions of Quality
   1.3 Taguchi's Quality Philosophy
   1.4 Objective of Taguchi Methods
   1.5 8 Steps in Taguchi Methodology

   Chapter 2 (Loss Function)
   2.1 Taguchi Loss Function
   2.2 Variation of the Quadratic Loss Function

   Chapter 3 (Analysis of Variance)
   3.1 Understanding Variation
   3.2 What is ANOVA
       3.2.1 No-Way ANOVA
           3.2.1.1 Degrees of Freedom
       3.2.2 One-Way ANOVA
       3.2.3 Two-Way ANOVA
   3.3 Example of ANOVA

   Chapter 4 (Orthogonal Arrays)
   4.1 What is an Array
   4.2 History of Arrays
   4.3 Introduction to Orthogonal Arrays
       4.3.1 Intersecting many factors - A case study
           4.3.1.1 Example of an Orthogonal Array
       4.3.2 A Full Factorial Experiment
   4.4 Steps in Developing an Orthogonal Array
       4.4.1 Selection of factors and/or interactions to be evaluated
       4.4.2 Selection of number of levels for the factors
       4.4.3 Selection of the appropriate OA
       4.4.4 Assignment of factors and/or interactions to columns
       4.4.5 Conduct tests
       4.4.6 Analyze results
       4.4.7 Confirmation experiment
   4.5 Example Experimental Procedure
   4.6 Standard Orthogonal Arrays

   Chapter 5 (Robust Design)
   5.1 What is Robustness
   5.2 The Robustness Strategy Uses Five Primary Tools
       5.2.1 P-Diagram
       5.2.2 Quality Measurement
       5.2.3 Signal-to-Noise Ratio
   5.3 Steps in Robust Parameter Design
   5.4 Noise Factors
   5.5 OFF-LINE and ON-LINE Quality Control
       5.5.1 OFF-LINE Quality Control
           5.5.1.1 Product Design
           5.5.1.2 Process Design
       5.5.2 ON-LINE Quality Control
           5.5.2.1 Product Quality Control Method (On-Line Quality Control Stage 1)
           5.5.2.2 Customer Relations (On-Line Quality Control Stage 2)

   References


                               Preface
When Japan began its reconstruction efforts after World War II, it faced an acute
shortage of good-quality raw materials, high-quality manufacturing equipment and skilled
engineers. The challenge was to produce high-quality products, and to keep improving that
quality, under these circumstances. The task of developing a methodology to meet the
challenge was assigned to Dr. Genichi Taguchi, who at that time was manager in
charge of developing certain telecommunication products at the Electrical Communication
Laboratories (ECL) of the Nippon Telegraph and Telephone Company (NTT). Through his
research in the 1950s and the early 1960s, Dr. Taguchi developed the foundations of
robust design and validated his basic philosophies by applying them in the development
of many products. In recognition of this contribution, Dr. Taguchi received the individual
Deming Award in 1962, which is one of the highest recognitions in the quality field.




                              CHAPTER 1
1.1                            Introduction
Genichi Taguchi attended Kiryu Technical College where he studied textile engineering.
From 1942 to 1945, he served in the Astronomical Department of the Navigation Institute
of the Imperial Japanese Navy. After that, he worked in the Ministry of Public Health and
Welfare and the Institute of Statistical Mathematics, Ministry of Education. While
working there, he was educated by Matosaburo Masuyama on the use of orthogonal arrays
and also on different experimental design techniques.
        In 1950, he began working at the newly formed Electrical Communications
Laboratory of the Nippon Telegraph and Telephone Company. He stayed there for more
than 12 years and was responsible for training engineers to be more effective with their
techniques. While he was there he consulted with many different Japanese companies and
also wrote his first book on orthogonal arrays.
        He served as a visiting Professor at the Indian Statistical Institute from 1954 to
1955. While he was there, Taguchi met Sir R.A. Fisher and Walter A. Shewhart. He
published the first edition of his two-volume book on experimental design in 1958. He
made his first visit to the United States in 1962 where he was a visiting Professor at
Princeton University. In the same year, he was also awarded his PhD from Kyushu
University.
        He developed the concept of the Quality Loss Function in the 1970’s. He also
published the third and most current edition of his book on experimental designs. He
revisited the United States in 1980 and from then his methods spread and became more
widely used. Genichi Taguchi made many important contributions during his lifetime.
Some of his most important contributions were probably to the field of quality control. However, he
also made many important contributions to experimental design.
         Professor Genichi Taguchi was the director of the Japanese Academy of Quality and
a four-time recipient of the Deming Prize. The term Taguchi Methods was coined in the
United States.
           Although SPC can assist the operator in eliminating the special causes of
defects, thus bringing the process under control, something more is still needed: the
continuous improvement of manufacturing processes, so that the production of robust
products can be assured. And this is where Taguchi comes in. He starts where SPC
(temporarily) finishes. He can help with the identification of common causes of variation,
the most difficult to determine and eliminate in a process. He attempts to go even further: he
tries to make the process and the product robust against their effect (elimination of the effect
rather than the cause) at the design stage; indeed, in dealing with uncontrollable (noise)
factors, there is no alternative. Even if the removal of the effect is impossible, he provides
a systematic procedure for controlling the noise (through tolerance design) at minimum
cost.
          When Dr. Taguchi first brought his ideas to America in 1980, he was already
well known in Japan for his contributions to quality engineering. His arrival in the U.S.
went virtually unnoticed, but by 1984 his ideas had generated so much interest that Ford
Motor Company sponsored the first Supplier Symposium on Taguchi Methods.
1.2                                 Definitions of Quality
       “Fitness for use”
                                                                         Dr. Juran (1964)

       The leading promoter of the “zero defects” concept and author of Quality is Free
         (1979) defines quality as “conformance to requirements”.
                                                                        Philip Crosby

       Quality should be aimed at the needs of the consumer, present and future.
                                                                          Dr. Deming

       The totality of features and characteristics of a product or service that bear on its
         ability to satisfy given needs.
                                           The American Society for Quality Control (1983)

       The aggregate of properties of a product determining its ability to satisfy the needs
         it was built to satisfy.
                                                                (Russian Encyclopaedia)
       The totality of features and characteristics of a product and service that bear on its
        ability to satisfy a given need.
                           (European Organization for Quality Control Glossary 1981)

Although these definitions are all different, some common threads run through them:
     • Quality is a measure of the extent to which customer requirements and expectations
        are satisfied.
     • Quality is not static, since customer expectations can change.
     • Quality involves developing product or service specifications and standards to meet
        customer needs (quality of design) and then manufacturing products or providing
        services which satisfy those specifications and standards (quality of conformance).
It is important to note that these traditional quality definitions do not refer to grade or
features. For example, a Honda car has more features and is generally considered to be a
higher-grade car than a Maruti, but that does not mean it is of better quality. A couple
with two children may find that a Maruti does a much better job of meeting their
requirements in terms of ease of loading and unloading, comfort when the entire family is
in the car, gas mileage, maintenance, and, of course, basic cost of the vehicle.




1.2.1        Traditional and Taguchi Definitions of Quality


Traditional Definition:
              The more traditional “goalpost” mentality of what is considered good quality
says that a product is either good or it isn't, depending on whether or not it is within the
specification range (between the lower and upper specification limits -- the goalposts).
With this approach, the specification range is more important than the nominal (target)
value. But is the product as good as it can be, or should be, just because it is within
specification?

                                       [Figure: Traditional Quality Definition]

Taguchi Definition:
             Taguchi says no to the above definition. He defines quality as deviation from
on-target performance. According to him, the quality of a manufactured product is the total
loss generated by that product to society from the time it is shipped. This financial loss,
or quality loss, is

                                       [Figure: Taguchi Quality Definition]

L(y) = k(y − m)²

where
y    objective characteristic
m    target value
k    constant
k = cost of a defective product / (tolerance)²
  = A/Δ²




1.3                  Taguchi’s Quality Philosophy
Genichi Taguchi's impact on the world quality scene has been far-reaching. His quality
engineering system has been used successfully by many companies in Japan and
elsewhere. He stresses the importance of designing quality into products and processes,
rather than depending on the more traditional tools of on-line quality control. Taguchi's
approach differs from that of other leading quality gurus in that he focuses more on the
engineering aspects of quality than on management philosophy or statistics. Also,
Dr. Taguchi uses experimental design primarily as a tool to make products more robust, to
make them less sensitive to noise factors. That is, he views experimental design as a tool
for reducing the effects of variation on product and process quality characteristics. Earlier
applications of experimental design focused more on optimizing average product
performance characteristics without considering effects on variation. Taguchi's quality
philosophy has seven basic elements:

   1.   An important dimension of the quality of a manufactured product is the total loss
        generated by the product to society. At a time when the BOTTOM LINE appears to be
        the driving force for so many organizations, it seems strange to see “loss to society”
        as part of product quality.

   2.   In a competitive economy, continuous quality improvement and cost reduction are
        necessary for staying in business. This is a hard lesson to learn. Masaaki Imai (1986)
        argues very persuasively that the principal difference between Japanese and
        American management is that American companies look to new technologies and
        innovation as the major route to improvement, while Japanese companies focus
        more on “Kaizen”, meaning gradual improvement in everything they do. Taguchi
        stresses the use of experimental designs in parameter design as a way of reducing
        quality costs. He identifies three types of quality costs: R & D costs, manufacturing
        costs, and operating costs. All three can be reduced through the use of suitable
        experimental designs.

   3.   A continuous quality improvement program includes continuous reduction in the
        variation of product performance characteristics about their target values. Again
        Kaizen, but with the focus on product and process variability. This does not fit the
        mold of quality being conformance to specification.

   4.   The customer's loss due to a product's performance variation is often
        approximately proportional to the square of the deviation of the performance
        characteristic from its target value. This concept of a quadratic loss function says
        that any deviation from target results in some loss to the customer, but that large
        deviations from target result in severe losses.

   5.   The final quality and cost of a manufactured product are determined to a large extent
        by the engineering designs of the product and its manufacturing process. This is so
        simple, and so true. The future belongs to companies which, once they understand
        the variability of their manufacturing processes using statistical process control,
        move their quality improvement efforts upstream to product and process design.

   6.   A product's (or process's) performance variation can be reduced by exploiting the
        nonlinear effects of the product (or process) parameters on the performance
        characteristics. This is an important statement because it gets to the heart of off-
        line QC. Instead of trying to tighten specifications beyond a process's capability,
        perhaps a change in design can allow specifications to be loosened. As an example,
        suppose that in a heating process the tolerance on temperature is a function of the
        heating time in the oven. The tolerance relationship is represented by the band in the
        given figure. For example: if a process specification says the heating process is to
        last 4.5 minutes, then the temperature must be held between 354.0 degrees and 355.0
        degrees, a tolerance interval 1.0 degrees wide. Perhaps the oven cannot hold this
        tight a tolerance. One solution would be spending a lot of money on a new oven and
        new controls. Another possibility would be to change the time for the heating process
        to, say, 3.5 minutes. Then the temperature would need to be held between 358.0
        and 360.6 degrees, an interval of width 2.6 degrees. If the oven could hold this
        tolerance, the most economical decision might be to adjust the specifications. This
        would make the process less sensitive to variation in oven temperature.




             [Figure: Time-Temperature Relationship]



   7.   Statistically designed experiments can be used to identify the settings of product
        parameters that reduce performance variation, and hence improve quality,
        productivity, performance, reliability, and profits. Statistically designed experiments
        will be the strategic quality weapon of the 1990s.




1.4                       Objective of Taguchi Methods
                         The objective of Taguchi's efforts is process and product-design
improvement through the identification of easily controllable factors and their settings
which minimize the variation in product response while keeping the mean response on
target. By setting those factors at their optimal levels, the product can be made robust to
changes in operating and environmental conditions. Thus, more stable and higher-quality
products can be obtained, and this is achieved during Taguchi's parameter-design stage by
removing the bad effect of the cause rather than the cause of the bad effect. Furthermore,
since the method is applied in a systematic way at a pre-production stage (off-line), it can
greatly reduce the number of time-consuming tests needed to determine cost-effective
process conditions, thus saving costs and wasted products.


1.5               8-Steps in Taguchi Methodology
1. Identify the main function, side effects and failure mode.

2. Identify the noise factor, testing condition and quality characteristics.

3. Identify the objective function to be optimized.

4. Identify the control factor and their levels.

5. Select the orthogonal array (matrix experiment).

6. Conduct the matrix experiment.

7.    Analyze the data; predict the optimum levels and performance.

8.    Perform the verification experiment and plan the future action.




                            CHAPTER 2
2.1                       Taguchi Loss Function
Genichi Taguchi has an unusual definition for product quality: “Quality is the loss a
product causes to society after being shipped, other than any losses caused by its intrinsic
functions.” By “loss” Taguchi refers to the following two categories:
    • Loss caused by variability of function.
    • Loss caused by harmful side effects.
An example of loss caused by variability of function would be an automobile that does not
start in cold weather. The car's owner would suffer a loss if he or she had to pay someone to
start the car. The car owner's employer loses the services of the employee, who is now late
for work. An example of a loss caused by a harmful side effect would be the frostbite
suffered by the owner of the car which would not start.
      An unacceptable product which is scrapped or reworked prior to shipment is viewed by
Taguchi as a cost to the company but not a quality loss.




2.1.1 Comparing the Quality Levels of SONY TV Sets Made in
JAPAN and in SAN DIEGO
The front page of the Asahi News on April 17, 1979 compared the quality levels of Sony
color TV sets made in Japanese plants and those made in the San Diego, California, plant.
The quality characteristic used to compare these sets was the color density distribution,
which affects color balance. Although all the color TV sets had the same design, most
American customers thought that the color TV sets made in the San Diego plant were of
lower quality than those made in Japan.
       The distribution of the quality characteristic of these color TV sets was given in the
Asahi News (shown in the figure). The quality characteristics of the TV sets from Japanese
Sony plants are normally distributed around the target value m. If a value of 10 is assigned
to the range of the tolerance specifications for this objective characteristic, then the
standard deviation of this normally distributed curve can be calculated and is about 10/6.
                   In quality control, the process capability index (Cp) is usually defined as
the tolerance specification divided by 6 times the standard deviation of the objective
characteristic:

                  Cp = tolerance / (6 × standard deviation)




                  Therefore, the process capability index of the objective characteristic of
Japanese Sony TV sets is about 1. In addition, the mean value of the distribution of these
objective characteristics is very close to the target value m.
             On the other hand, a higher percentage of TV sets from San Diego Sony are
within the tolerance limits than those from Japanese Sony. However, the color density of
the San Diego TV sets is uniformly distributed rather than normally distributed. Therefore,
the standard deviation of these uniformly distributed objective characteristics is about 1/√12
of the tolerance specification. Consequently, the process capability index of the San Diego
Sony plant is calculated as follows:

           Cp = tolerance / (6 × tolerance/√12) = √12/6 = 0.577

It is obvious that the process capability index of San Diego Sony is much lower than that
of Japanese Sony.
                            All products that are outside the tolerance specifications are
supposed to be considered defective and are not shipped out of the plant. Products that are
within the tolerance specifications are assumed to pass and are shipped. As a matter of fact,
tolerance specifications are very similar to tests in schools, where 60% is usually the
dividing line between passing and failing, and 100% is the ideal score. In our example of
TV sets, the ideal condition is that the objective characteristic, color density, meets the
target value m. The more the color density deviates from the target value, the lower the
quality level of the TV set. If the deviation of color density is beyond the tolerance
specifications, m ± 5, a TV set is considered defective. In the case of a school test, 59% or
below is failing, while 60% or above is passing. Similarly, the grades between 60% and
100% in evaluating quality can be classified as follows:

                              60%-69%       Passing (D)
                              70%-79%       Fair (C)
                              80%-89%       Good (B)
                              90%-100%      Excellent (A)

The “grades” D, C, B and A in parentheses above are quite commonly used in the United
States for categorizing the objective characteristics of products. Thus, one can apply this
scheme to the classification of the objective characteristics (color density) of these color
TV sets as shown in the figure. One can see that a very high percentage of Japanese Sony
TV sets are within grade B, and a very low percentage are within or below grade D. In
comparison, the color TV sets from San Diego Sony have about the same percentage in
grades A, B and C.
                  To reduce the difference in process capability indices between Japanese
Sony and San Diego Sony (and thus seemingly increase the quality level of the San
Diego sets), the latter tried to tighten the tolerance specifications to extend only to grade C
shown in the figure, rather than grade D. Therefore, only the products within grades A, B
and C were treated as passing. But this approach is faulty. Tightening the tolerance
specifications because of a low process capability in a production plant is as meaningless as
increasing the passing score of school tests from 60% to 70% just because students do not
learn well. On the contrary, such a school should consider asking the teachers to lower the
passing score for students who do not test as well, instead of raising it. The next section will
illustrate how to evaluate the functional quality of products meaningfully and correctly.

                               Quadratic Loss Function

When an objective characteristic y deviates from its target value m, some financial loss
will occur. Therefore, the financial loss, sometimes referred to simply as quality loss or
used as an expression of quality level, can be assumed to be a function of y, which we
shall designate L(y). When y meets the target m, the value of L(y) will be at a minimum;
generally, the financial loss can be assumed to be zero under this ideal condition:

                       L(m) = 0                    ……………… Equation 2.1

Since the financial loss is at a minimum at this point, the first derivative of the loss function
with respect to y at this point should also be zero. Therefore, one obtains the following
equation:
                    L′(m) = 0                     …………….. Equation 2.2

If one expands the loss function L(y) in a Taylor series around the target
value m and takes Equations (2.1) and (2.2) into consideration, one gets the following
equation:

L(y) = L(m) + L′(m)(y−m)/1! + L″(m)(y−m)²/2! + ……………
     = L″(m)(y−m)²/2! + ………

This result is obtained because the constant term L(m) and the first derivative term L′(m) are
both zero. In addition, the third-order and higher-order terms are assumed to be negligible.
Thus, one can express the loss function as a squared term multiplied by a constant k:

            L(y) = k(y−m)²                                 …………… Equation 2.3
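The dominance of the quadratic term near the target can also be checked symbolically. The
sketch below uses a hypothetical loss curve (the cubic term b(y − m)³ is our illustrative
addition, not from the text) and SymPy's series expansion:

import sympy as sp

y, m, a, b = sp.symbols('y m a b')

# A hypothetical loss curve satisfying L(m) = 0 and L'(m) = 0,
# with a higher-order term added for illustration
L = a*(y - m)**2 + b*(y - m)**3

# Expanding about y = m and dropping third- and higher-order terms
# leaves only the quadratic term, i.e. the k(y - m)**2 of Equation 2.3
quadratic = L.series(y, m, 3).removeO()
print(quadratic)   # a*(y - m)**2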

When the deviation of the objective characteristic from the target value increases, the
corresponding quality loss will increase. When the magnitude of the deviation is outside the
tolerance specifications, the product should be considered defective.
Let the cost due to a defective product be A and the corresponding magnitude of the
deviation from the target value be Δ. Substituting Δ into the right-hand side of Equation
(2.3), one can determine the value of the constant k by the following equation:

                       k = cost of a defective product / (tolerance)² = A/Δ²

In the case of the Sony colour TV sets, let the adjustment cost be A = 300 Rs when the
colour density is out of the tolerance specifications.

Therefore, the value of k can be calculated by the following equation:

k = 300/5² = 12 Rs

Therefore, the loss function is
                                  L(y) = 12(y – m)²
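In code form this is one line; the sketch below (function name ours) evaluates the loss at a
few deviations and shows that the loss at the tolerance limit equals the adjustment cost
A = 300 Rs:

def quality_loss(y, m, k=12.0):
    # Quadratic quality loss L(y) = k*(y - m)**2, Equation 2.3,
    # with k = A/delta**2 = 300/5**2 = 12 Rs for the Sony TV example
    return k * (y - m) ** 2

for deviation in (0.0, 1.0, 2.5, 5.0):
    print(deviation, quality_loss(deviation, 0.0))
# 0.0 -> 0 Rs, 1.0 -> 12 Rs, 2.5 -> 75 Rs, 5.0 -> 300 Rs (= A, at the tolerance limit)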

                      This equation is still valid even when only one unit of product is made.
Consider the visitor to the BHEL Heavy Electric Equipment Company in India who was
told, “In our company, only one unit of product needs to be made for our nuclear power
plant. In fact, it is not really necessary for us to make another unit of product. Since the
sample size is only one, the variance is zero. Consequently, the quality loss is zero and it is
not really necessary for us to apply statistical approaches to reduce the variance of our
product.”
                         However, the quality loss function [L = k(y – m)²] is defined in terms
of the mean square deviation of objective characteristics from their target value, not the
variance of products. Therefore, even when only one product is made, the corresponding
quality loss can still be calculated by Equation (2.3). Generally, the mean square deviation
of objective characteristics from their target values can be applied to estimate the mean
value of the quality loss in Equation (2.3). One can calculate the mean square deviation
from target, σ² (σ² in this equation is not the variance), by the following equation (the term
σ² is also called the mean square error or the variance of products):

                       σ² = (1/n) Σ (yᵢ − m)²

Table 2.1: Quality of the Sony TV sets, where the tolerance specification is 10 and the
objective function data correspond to the figure

Plant          Mean Value of          Standard       Variation      Loss L        Defective
Location       Objective Function     Deviation      σ²             (in Rs)       Ratio

Japan          m                      10/6           10²/36         33            0.27%
San Diego      m                      10/√12         10²/12         100           0.00%



Substituting this equation into Equation (2.3), one gets the following equation:

                            L = kσ²                        …………… Equation 2.4

From Equation (2.4), one can evaluate the differences in average quality levels between
the TV sets from Japanese Sony and those from San Diego Sony, as shown in Table 2.1.
From Table 2.1 it is clear that although the defective ratio of Japanese Sony is higher
than that of San Diego Sony, the quality level of the former is 3 times higher than that of
the latter. Assume that one tightens the tolerance specifications of the TV sets of San Diego
Sony to m ± 10/3.
Also assume that these TV sets remain uniformly distributed after the tolerance
specifications are tightened. The average quality level of San Diego Sony TV sets would
be improved to the following quality level:

L = 12[(1/√12)(10)(2/3)]² = 44 Rs

where the value of the loss function is considered the relative quality level of the product.
This average quality level of the San Diego Sony TV sets is 56 Rs better than the
original quality level, but still 11 Rs worse than that of the Japanese Sony TV sets. In
addition, in this type of quality improvement, one must adjust the products that are between
the two tolerance limits, m ± 10/3 and m ± 5, to be within m ± 10/3. In the uniform
distribution shown in Figure 2.1, 33.3% would need adjustment, which would cost 300 Rs
per unit. Therefore each TV set from San Diego Sony would cost an additional 100 Rs on
average for the adjustment:

                               (300)(0.333) = 100 Rs

Consequently, it is not really a good idea to spend 100 Rs more per product to gain an
improvement worth only 56 Rs. A better way is to apply quality management methods to
improve the quality level of products.
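The arithmetic of this section can be reproduced with a short sketch of Equation (2.4); the
variable names are ours:

k = 12.0                              # Rs, from k = A/delta**2

sigma2_japan = 10.0**2 / 36.0         # normal distribution, sigma = 10/6
sigma2_san_diego = 10.0**2 / 12.0     # uniform over the full tolerance range of 10

print(k * sigma2_japan)               # ~33 Rs average loss (Japanese Sony)
print(k * sigma2_san_diego)           # 100 Rs average loss (San Diego Sony)

# Tightening San Diego's limits to m +/- 10/3 (range 20/3), still uniform:
sigma2_tightened = (20.0/3.0)**2 / 12.0
print(k * sigma2_tightened)           # ~44 Rs, an improvement worth only ~56 Rs
print(300 * 0.333)                    # ~100 Rs adjustment cost per set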

2.2                    Variation of the Quadratic Loss Function

   1. Nominal-the-best type: Whenever the quality characteristic y has a finite target
      value, usually nonzero, and the quality loss is symmetric on either side of the
      target, such a quality characteristic is called nominal-the-best type. This is given by
      the equation

                L(y) = k(y−m)²                    ………………………Equation 1

      The color density of a television set and the output voltage of a power supply
      circuit are examples of the nominal-the-best type quality characteristic.
   2. Smaller-the-better type: Some characteristics, such as radiation leakage from a
      microwave oven, can never take negative values. Also, their ideal value is equal to
      zero, and as their value increases, the performance becomes progressively worse.
      Such characteristics are called smaller-the-better type quality characteristics. The
      response time of a computer, leakage current in electronic circuits, and pollution
      from an automobile are additional examples of this type of characteristic. The
      quality loss in such situations can be approximated by the following function, which
      is obtained from Equation 1 by substituting m = 0:

                           L(y) = ky²

      This is a one-sided loss function because y cannot take negative values.




3. Larger-the-better type: Some characteristics, such as the bond strength of adhesives,
   also do not take negative values. But zero is their worst value, and as their value
   becomes larger, the performance becomes progressively better - that is, the quality
   loss becomes progressively smaller. Their ideal value is infinity, and at that point the
   loss is zero. Such characteristics are called larger-the-better type characteristics. It is
   clear that the reciprocal of such a characteristic has the same qualitative behavior as
   a smaller-the-better type characteristic. Thus we approximate the loss function for a
   larger-the-better type characteristic by substituting 1/y for y in Equation 1:

               L(y) = k[1/y²]

4. Asymmetric loss function: In certain situations, deviation of the quality characteristic
   in one direction is much more harmful than deviation in the other direction. In such
   cases, one can use a different coefficient k for the two directions. Thus, the quality
   loss would be approximated by the following asymmetric loss function:

                            L(y) =   k₁(y−m)²,  y > m
                                     k₂(y−m)²,  y ≤ m
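The four variants differ only in the substitution made in Equation 1. A minimal sketch of all
four (function and parameter names are ours):

def nominal_the_best(y, m, k):
    # Symmetric quadratic loss about a finite target m
    return k * (y - m) ** 2

def smaller_the_better(y, k):
    # Target m = 0; loss grows as y moves away from zero
    return k * y ** 2

def larger_the_better(y, k):
    # Substitute 1/y for y: loss vanishes as y grows toward infinity
    return k / y ** 2

def asymmetric(y, m, k_upper, k_lower):
    # Different coefficients above and below the target
    return (k_upper if y > m else k_lower) * (y - m) ** 2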




                             CHAPTER 3

                   Introduction to Analysis of Variance

3.1 Understanding variation
The purpose of product or process development is to improve the performance
characteristics of the product or process relative to customer needs and expectations.

The purpose of experimentation should be to reduce and control the variation of a
product or a process; subsequently, decisions must be made about which parameters affect
the performance of the product/process.
Since variation is a large part of the discussion relative to quality, analysis of variance
(ANOVA) will be the statistical method used to interpret experimental data and make the
necessary decisions.
3.2 What is ANOVA

ANOVA is a statistically based decision tool for detecting any differences in the average
performance of groups of items tested.

ANOVA is a mathematical technique which breaks total variation down into accountable
sources; total variation is decomposed into its appropriate components.

We will start with a very simple case and then build up to more comprehensive situations.

Thereafter, we will apply ANOVA to some very specialized experimental situations.
3.2.1 No-way ANOVA

Imagine an engineer is sent to the production line to sample a set of windshield pumps for
the purpose of measuring flow rate. The data collected are as under:

Pump No.               1      2      3      4      5      6      7      8
Flow rate (oz/min)     5      6      8      2      5      4      4      6

(1 oz/min = 0.473 ml/s)

No-way ANOVA breaks total variation down into only two components:

1. The variation of the average (or mean) of all the data points relative to zero
2. The variation of individual data points around the average (traditionally called
   experimental error)

The notation used in the calculation method is as under:

y  = observation or response or simply data
yᵢ = ith response; for example y₃ = 8 oz/min
N  = total number of observations
T  = sum of all observations
T̄  = average of all observations = T/N = ȳ

In this case
N = 8, T = 40 oz/min, and T̄ = 5.0 oz/min

What is the reason for variation from pump to pump?
The true flow rate is actually unknown; it is only estimated through the use of some flow
meter. There will be some unknown measurement error present, but flow rate will
nonetheless be observed and accepted as the pump's performance under the conditions of
the test.

Also, the pumps were randomly selected. Although the manufacturer aims to produce
identical pumps, there will be slight differences from pump to pump, causing a pump-to-
pump variation in performance. (This is the natural variation of the process.)

It is for this reason that the flow rates of the pumps are not identical.
No-way ANOVA can be illustrated graphically.

[Graphical illustration omitted in these notes]

The magnitude of each observation can be represented by a line segment extending
from zero to the observation.
These line segments can be divided into two portions:
   - One portion attributed to the mean;
   - The other portion attributed to the error.

The error includes the natural process variation and the measurement error.

   - The magnitude of the line segment due to the mean is indicated by extending a line
     from the average value to zero.
   - The magnitude of the line segment due to error is indicated by the difference of the
     average value from each observation.

To calculate the total variation present we will do a mathematical operation which will
allow a clearer picture to develop.

The magnitudes of each of the line segments can be squared and then summed to provide a
measure of the total variation present:

SST = total sum of squares = 5² + 6² + 8² + … + 6² = 222.0

The magnitude of the line segment due to the mean can also be squared and summed:

SSm = sum of squares due to the mean = N(T̄)²

But T̄ = T/N, so

SSm = N(T/N)² = T²/N = 40²/8 = 200.0

The portion of the magnitude of each line segment due to error can be squared and summed
to provide a measure of the variation around the average value:

SSe = error sum of squares = Σ (yᵢ − ȳ)²    (summed over i = 1 … N)

SSe = 0² + 1² + 3² + (−3)² + 0² + (−1)² + (−1)² + 1² = 22.0

Note that

222.0 = 200.0 + 22.0

This demonstrates a basic property of ANOVA: the total sum of squares is equal to the
sum of squares due to the known components,

SST = SSm + SSe

The formulas for the sums of squares can be written generally as

SST = Σ yᵢ²                 (i = 1 … N)
SSm = T²/N
SSe = Σ (yᵢ − T̄)²           (i = 1 … N)

In the above example the error value was calculated directly, but this is not necessary, since

SSe = SST − SSm
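The whole no-way ANOVA for the pump data fits in a few lines of Python (a sketch; the
variable names are ours):

flow = [5, 6, 8, 2, 5, 4, 4, 6]      # oz/min, the eight sampled pumps

N = len(flow)
T = sum(flow)                         # 40
SS_T = sum(y**2 for y in flow)        # 222.0, total sum of squares
SS_m = T**2 / N                       # 200.0, sum of squares due to the mean
SS_e = SS_T - SS_m                    # 22.0, error sum of squares

v_e = N - 1                           # 7 dof for error (1 dof goes to the mean)
V_e = SS_e / v_e                      # ~3.14, the error variance
print(SS_T, SS_m, SS_e, V_e)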




3.2.1.1   Degrees of Freedom (dof)
To complete the ANOVA calculations, one other element must be considered, i.e. degrees
of freedom. The concept of dof is to allow 1 dof for each independent comparison that can
be made in the data.

Only 1 independent comparison can be made for the mean of all the data (there is
only 1 mean). Therefore, only 1 dof is associated with the mean.

The concept of dof also applies to the dof associated with the error estimate. With
reference to the 8 observations, there are 7 independent comparisons that can be made to
estimate the variation in the data. Data point 1 can be compared to 2, 2 to 3, 3 to 4, etc.,
which makes 7 independent comparisons.

One of the questions an instructor dreads most from an audience is,

"What exactly is degrees of freedom?"

It's not that there's no answer. The mathematical answer is a single phrase, "The rank of a
quadratic form."

It is one thing to say that degrees of freedom is an index and to describe how to calculate it
for certain situations, but none of these pieces of information tells what degrees of freedom
means.

At the moment, I'm inclined to define degrees of freedom as a way of keeping score.

A data set contains a number of observations, say, n. They constitute n individual pieces of
information. These pieces of information can be used either to estimate parameters (mean)
or variability.

In general, each item being estimated costs one degree of freedom. The remaining degrees
of freedom are used to estimate variability. All we have to do is count properly.

A single sample: There are n observations. There's one parameter (the mean) that needs to
be estimated. That leaves (n-1) degrees of freedom for estimating variability.
Two samples: There are n₁ + n₂ observations. There are two means to be estimated. That
leaves (n₁ + n₂ − 2) degrees of freedom for estimating variability.

Let v = dof:
   vt = total dof
   vm = dof associated with the mean (always 1 for each sample)
   ve = dof associated with the error
Then vt = vm + ve, i.e. 8 = 1 + 7.
The total dof equals the total number of observations in the data set for this method of
ANOVA.
Summary of the no-way ANOVA table:

Source                                SS                             dof
Mean                                  200                            1
Error                                 22                             7
Total                                 222                            8

One other statistic calculated from ANOVA is the variance V.

The error variance, or simply the variance, is

Ve = SSe/ve = 22/7 = 3.14

Also, the sample standard deviation is s = √V, where

s = √[ Σ (yᵢ − ȳ)² / (N − 1) ]    (summed over i = 1 … N)

so

s² = V = Σ (yᵢ − ȳ)² / (N − 1)

which is essentially

Ve = SSe/ve
Although the formula above is faster than ANOVA for calculating the error variance in this
case, when the experimental situations become more complex, ANOVA becomes the faster
method.

Error variance is a measure of the variation due to all the uncontrolled parameters,
including the measurement error involved in a particular experiment (set of data collected),
which is essentially the natural variation of the process.



3.2.2 One-Way ANOVA
This is the next most complex ANOVA to conduct.
This situation considers the effect of one controlled parameter upon the performance of a
product or process, in contrast to no-way ANOVA, where no parameters were controlled.

Again, let us work through this with an imaginary, yet potentially real, situation.

Imagine the same engineer who took samples for the flow rate of windshield pumps is
charged with the task of establishing the fluid velocity generated by the windshield washer
pumps.

The concern is that when the fluid velocity is too low, the fluid will merely dribble out, and
when it is too high, the air movement past the windshield will not be able to distribute the
cleaning fluid adequately to satisfy the car driver.

The engineer proposes a test of three different orifice areas to determine which gives a
proper fluid velocity.

Before the test data are collected, some notation to simplify the mathematical discussion:

A   = factor under investigation (outlet orifice area)
A1  = 1st level of orifice area = 0.0015 sq. in
A2  = 2nd level of orifice area = 0.0030 sq. in
A3  = 3rd level of orifice area = 0.0045 sq. in

The same symbols used for the level designations will also be used to denote the sums of
responses:

Ai  = sum of observations under the Ai level
Āi  = average of observations under the Ai level = Ai/nAi
T   = sum of all observations
T̄   = average of all observations = T/N
nAi = number of observations under the Ai level
kA  = number of levels of factor A
With this notation in mind, the engineer constructs four pumps with each given orifice area
(making 12 to test for the three levels).
The test data are as under:

Level      Area (sq in)     Velocity (ft/s)               Total
A1         0.0015           2.2    1.9    2.7    2.0      8.8
A2         0.0030           1.5    1.9    1.7    -*       5.1
A3         0.0045           0.6    0.7    1.1    0.8      3.2
Grand Total                                               17.1

* Pump dropped and destroyed; no data

A1 = 8.8 ft/s    nA1 = 4    Ā1 = 2.2 ft/s
A2 = 5.1 ft/s    nA2 = 3    Ā2 = 1.7 ft/s
A3 = 3.2 ft/s    nA3 = 4    Ā3 = 0.8 ft/s
kA = 3
T = 17.1 ft/s    N = 11     T̄ = 17.1/11 ≈ 1.55 ft/s
Sum of squares (one-way ANOVA)
Two methods can be used to complete the calculations:
   - Including the mean
   - Excluding the mean

Method 1 (including the mean)
As before, the total variation can be decomposed into its appropriate components:

 -   The variation of the mean of all observations relative to zero - VARIATION DUE
     TO THE MEAN
 -   The variation of the mean of observations under each factor level around the average
     (mean) of all observations - VARIATION DUE TO FACTOR A
 -   The variation of individual observations around the average of observations under
     each level - VARIATION DUE TO ERROR

The calculations are identical to the no-way ANOVA example, except for the component of
variation due to factor A, the outlet orifice area.

SST = Σ yᵢ² = 2.2² + 1.9² + … + 0.8² = 31.19

SSm = N(T̄)² = T²/N = 17.1²/11 = 26.583

Graphically, this can also be shown. [Graphical illustration omitted in these notes]
The magnitude of the line segment for each level of factor A is squared and summed. For
instance, the length of the line segment due to level A1 is (Ā1 − T̄).
There are four observations under the A1 condition. The same type of information is
collected for the other levels of factor A.

SSA = nA1(Ā1 − T̄)² + nA2(Ā2 − T̄)² + nA3(Ā3 − T̄)²

SSA = 4(0.64545)² + 3(0.14545)² + 4(−0.75455)² = 4.007

The above calculation is tedious and is mathematically equivalent to:

SSA = Σ (Ai²/nAi) − T²/N    (summed over i = 1 … kA)

SSA = 8.8²/4 + 5.1²/3 + 3.2²/4 − 17.1²/11 = 4.007, which is the same as above.

The variation due to error is given by

SSe = Σⱼ Σᵢ (yᵢ − Āⱼ)²    (j = 1 … kA; i = 1 … nAj)

SSe = 0² + (−0.3)² + 0.5² + (−0.2)² + … + 0.3² + 0²

SSe = 0.600

Error variation is again based on the method of least squares, but in one-way ANOVA the
least squares are evaluated around the average of each level of the controlled factor.

Error variation is the uncontrolled variation within the controlled group. Again, the total
variation is

SST = SSm + SSA + SSe

31.190 = 26.583 + 4.007 + 0.600

Dof (including the mean)
vt = vm + vA + ve
vt = 11,    vm = 1,    vA = kA − 1 = 2
ve = 11 − 1 − 2 = 8

One-way ANOVA summary (Method 1)

Source                   SS          dof v       Variance V
Mean m                   26.583      1           26.583
Factor A                 4.007       2           2.004
Uncontrolled error e     0.600       8           0.075
Total                    31.190      11

In the above table we have been able to estimate the variance for both factor A and the
uncontrolled error. This is what will be of interest to us when we design experiments.

Also, if you look at the calculations done for Method 1, you will observe that the mean does
not affect the calculations for the variation due to factor A and error in any way.

Thus, in most experimental situations the mean has no practical value, with the exception of
'lower is better' situations, where the variation due to the mean is a measure of how far the
average is from zero and how successful the factor might be in reducing the average to
zero.

Method 2 (excluding the mean): When we exclude the mean from the ANOVA calculations,
the total variation is then calculated from:

   - The variation of the average of observations under each factor level around the
     average of all observations
   - The variation of the individual observations around the average of observations
     under each factor level

Again, this can be demonstrated graphically.

The same concept of summing the squares of the magnitudes of the various line segments is
applied in Method 2 as well.

SST = Σ (yᵢ − T̄)² = 4.607    (summed over i = 1 … N)

Mathematically this is equivalent to

SST = Σ yᵢ² − T²/N
This expression will be used to define the total variation by this method.

Note that this is equivalent to (SST − SSm) from the previous calculations.

The variation due to factor A and the uncontrolled error are calculated identically as in
Method 1:

SSA = Σ (Ai²/nAi) − T²/N    (summed over i = 1 … kA)

SSe = Σⱼ Σᵢ (yᵢ − Āⱼ)²


Dof (excluding the mean)
In Method 1, the dof was

vt = vm + vA + ve

where vm = 1 (always) and vt = N.

In Method 2 (mean excluded), the dof for the mean is subtracted from both sides of the
above equation.

So
vt = N − 1 = 11 − 1 = 10
vt = vA + ve
10 = (kA − 1) + ve,    so ve = 8

One-way ANOVA summary (Method 2)

Source                   SS          dof v       Variance V
Factor A                 4.007       2           2.004
Uncontrolled error e     0.600       8           0.075
Total                    4.607       10

The values of the variance for factor A and the error in both methods are identical. Method 2
disregards the value of the mean and is the more popular method.
Only when the performance parameter is a 'lower is better' characteristic would the variance
due to the mean be relevant; this provides a measure of how effective some factor might be
in reducing the average to zero.
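Both methods can be verified with a short sketch (Method 2 shown; the dictionary layout
and names are ours):

levels = {                            # velocity (ft/s) by orifice-area level
    "A1": [2.2, 1.9, 2.7, 2.0],
    "A2": [1.5, 1.9, 1.7],            # the dropped pump leaves only 3 data
    "A3": [0.6, 0.7, 1.1, 0.8],
}

data = [y for obs in levels.values() for y in obs]
N, T = len(data), sum(data)           # 11 and 17.1

SS_T = sum(y**2 for y in data) - T**2 / N                                  # 4.607
SS_A = sum(sum(obs)**2 / len(obs) for obs in levels.values()) - T**2 / N  # 4.007
SS_e = SS_T - SS_A                                                        # 0.600

v_A = len(levels) - 1                 # 2
v_e = (N - 1) - v_A                   # 8
print(SS_A / v_A, SS_e / v_e)         # variances ~2.004 and ~0.075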
Let us sum up the above discussion.

Define three "sums of squares":

Total Corrected Sum of Squares (SST)
   • Squared deviations of observations from the overall average

Error Sum of Squares (SSE)
   • Squared deviations of observations from the treatment averages

Treatment Sum of Squares (SStrt)
   • Squared deviations of the treatment averages from the overall average (times n)

Dot notation (a treatments, n observations per treatment, N = an in total):

y.. = Σᵢ Σⱼ yᵢⱼ                  (the grand total)

ȳ.. = y../N                     (the overall average)

yᵢ. = Σⱼ yᵢⱼ                     (the total within treatment i)

ȳᵢ. = yᵢ./n                     (the average within treatment i)

Raw SS = Σᵢ Σⱼ yᵢⱼ²

Total corrected SS:

SST = Σᵢ Σⱼ (yᵢⱼ − ȳ..)²

This measures the overall variability in the data.

SST/(N − 1) is just the sample variance of the whole dataset.
DECOMPOSITION OF SS

We now derive the following equation:

SST = SStrt + SSE

SST = Σᵢ Σⱼ (yᵢⱼ − ȳ..)²
    = Σᵢ Σⱼ [(yᵢⱼ − ȳᵢ.) + (ȳᵢ. − ȳ..)]²
    = Σᵢ Σⱼ (yᵢⱼ − ȳᵢ.)² + Σᵢ Σⱼ (ȳᵢ. − ȳ..)² + Σᵢ Σⱼ 2(yᵢⱼ − ȳᵢ.)(ȳᵢ. − ȳ..)
    = SSE + Σᵢ n(ȳᵢ. − ȳ..)² + Σᵢ Σⱼ 2(yᵢⱼ − ȳᵢ.)(ȳᵢ. − ȳ..)
    = SSE + SStrt + 0

We must show that the last term is zero:

Σᵢ Σⱼ 2(yᵢⱼ − ȳᵢ.)(ȳᵢ. − ȳ..) = 2 Σᵢ (ȳᵢ. − ȳ..) Σⱼ (yᵢⱼ − ȳᵢ.)
                              = 2 Σᵢ (ȳᵢ. − ȳ..)(yᵢ. − n ȳᵢ.)
                              = 2 Σᵢ (ȳᵢ. − ȳ..) · 0 = 0



3.2.3 Two-Way ANOVA
Two-way ANOVA is the next highest order of ANOVA to review.

There are two controlled parameters in this experimental situation.

Let us consider an experimental situation: a student worked at an aluminum casting
foundry which manufactured pistons for reciprocating engines.

The problem with the process was how to attain the proper hardness (Rockwell B) of the
casting for a particular product.

Engineers were interested in the effect of copper and magnesium content on casting
hardness.
According to the specs, the copper content could be 3.5 to 4.5% and the magnesium content
could be 1.2 to 1.8%.

The student runs an experiment to evaluate these factors and conditions simultaneously.

If A = % copper content:       A1 = 3.5       A2 = 4.5

If B = % magnesium content:    B1 = 1.2       B2 = 1.8

The number of experimental conditions for f two-level factors is 2^f; here 2² = 4, namely

A1B1    A1B2    A2B1    A2B2

Imagine four different mixes of metal constituents are prepared, castings poured and
hardness measured. Two samples are measured from each mix for hardness. The results
look like:

                   A1             A2
B1                 76, 78         73, 74
B2                 77, 78         79, 80

To simplify the discussion, 70 points are subtracted from each of the above observations
from each of the four mixes. The transformed results can be shown as:

                   A1             A2
B1                 6, 8           3, 4
B2                 7, 8           9, 10
Two way ANOVA calculations:

The variation may be decomposed into more components:
  1. Variation due to factor A
  2. Variation due to factor B
  3. Variation due to interaction of factors A and B
  4. Variation due to error
An equation for total variation can be written as
SST = SSA + SSB + SSA×B + SSe
A x B represents the interaction of factor A and B. The interaction is the mutual effect of
Cu and Mg in affecting hardness.

Some preliminary calculations will speed up the ANOVA:

                            A1                    A2                   Total
B1                         6, 8                   3, 4                 21
B2                         7, 8                   9, 10                34
Total                      29                     26                   55

A1 = 29        A2 = 26        B1 = 21        B2 = 34        Grand total T = 55,  N = 8

nA1 = 4        nA2 = 4        nB1 = 4        nB2 = 4
Total Variation

SST = Σ yi² − T²/N        (summed over all N observations)

    = 6² + 8² + 3² + … + 10² − 55²/8 = 40.875

Variation due to factor A

SSA = Σi (Ai²/nAi) − T²/N = A1²/nA1 + A2²/nA2 + … + Ak²/nAk − T²/N

SSA = 29²/4 + 26²/4 − 55²/8 = 1.125

A quick arithmetic check here: the level totals must sum to the grand total and the level sample sizes to N, i.e.

Numerator: 29 + 26 = 55, and
Denominator: 4 + 4 = 8.

If these conditions are not met, then the SSA calculation will be wrong.

For a two-level experiment, when the sample sizes are equal, the equation above can be simplified to this special formula:
SSA = (A1 − A2)²/N = (29 − 26)²/8 = 1.125
Similarly, the variation due to factor B:

SSB = (B1 − B2)²/N = (21 − 34)²/8 = 21.125




To calculate the variation due to the interaction of factors A and B:

Let (A×B)i represent the sum of data under the ith condition of the combination of factors A and B. Also let c represent the number of possible combinations of the interacting factors and n(A×B)i the number of data points under that condition. Then

SSA×B = Σi [ (A×B)i² / n(A×B)i ] − T²/N − SSA − SSB        (i = 1…c)

Note that when the various combinations are summed, squared, and divided by the number
of data points for that combination, the subsequent value also includes the factor main
effects which must be subtracted. While using this formula, all lower order interactions
and factor effects are to be subtracted.

For the example problem:
A1B1 = 14,   A1B2 = 15,   A2B1 = 7,   A2B2 = 19

The number of possible combinations is c = 4, and since there are two observations under each combination, n(A×B)i = 2.

SSA×B = 14²/2 + 15²/2 + 7²/2 + 19²/2 − 55²/8 − SSA − SSB = 15.125

Since SST = SSA + SSB + SSA×B + SSe,

SSe = 40.875 − 1.125 − 21.125 − 15.125 = 3.500
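The whole decomposition can be reproduced in a few lines. The following is an illustrative sketch in Python (not part of the original worked example) that recomputes SST, SSA, SSB, SSA×B and SSe for the casting data, using the special two-level formula for the factor sums of squares:

```python
import numpy as np

# Transformed hardness data (70 subtracted), keyed by (A level, B level)
data = {('A1', 'B1'): [6, 8],  ('A2', 'B1'): [3, 4],
        ('A1', 'B2'): [7, 8],  ('A2', 'B2'): [9, 10]}

y = np.array([v for cell in data.values() for v in cell], dtype=float)
T, N = y.sum(), y.size                      # T = 55, N = 8

SS_T = (y ** 2).sum() - T ** 2 / N          # 40.875

A1 = sum(sum(v) for (a, _), v in data.items() if a == 'A1')   # 29
A2 = sum(sum(v) for (a, _), v in data.items() if a == 'A2')   # 26
B1 = sum(sum(v) for (_, b), v in data.items() if b == 'B1')   # 21
B2 = sum(sum(v) for (_, b), v in data.items() if b == 'B2')   # 34

SS_A = (A1 - A2) ** 2 / N                   # 1.125
SS_B = (B1 - B2) ** 2 / N                   # 21.125

# Interaction: cell totals squared over cell sizes, minus T^2/N and main effects
SS_AB = (sum(sum(v) ** 2 / len(v) for v in data.values())
         - T ** 2 / N - SS_A - SS_B)        # 15.125
SS_e = SS_T - SS_A - SS_B - SS_AB           # 3.500

print(SS_T, SS_A, SS_B, SS_AB, SS_e)
```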
Degrees of Freedom (dof) – Two-way ANOVA

νT = νA + νB + νA×B + νe
νT = N − 1 = 8 − 1 = 7
νA = kA − 1 = 1
νB = kB − 1 = 1
νA×B = (νA)(νB) = 1
νe = 7 − 1 − 1 − 1 = 4


ANOVA Summary Table (Two-way)

Source             SS                 dof ν       Variance V          F
A                  1.125              1           1.125               1.29
B                  21.125             1           21.125              24.14*
A×B                15.125             1           15.125              17.29**
e                  3.500              4           0.875
Total              40.875             7
* significant at 99% confidence (F(1,4) at 0.01 = 21.20)
** significant at 95% confidence (F(1,4) at 0.05 = 7.71)
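The F ratios above are each factor's variance divided by the error variance, and the significance calls can be checked against tabulated critical values. A small sketch (assuming SciPy is available; the critical values printed are standard F-distribution quantiles):

```python
from scipy.stats import f

V_e, v_e = 0.875, 4                  # error variance and its dof from the table
for name, SS in [('A', 1.125), ('B', 21.125), ('AxB', 15.125)]:
    F_ratio = (SS / 1) / V_e         # each factor has 1 dof
    print(name, round(F_ratio, 2))

print(round(f.ppf(0.95, 1, v_e), 2))   # 7.71  -> 95% critical value
print(round(f.ppf(0.99, 1, v_e), 2))   # 21.2  -> 99% critical value
# B (24.14) exceeds the 99% critical value, AxB (17.29) exceeds the 95% one,
# and A (1.29) is not significant.
```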
The ANOVA results indicate that Cu by itself has no significant effect on the resultant hardness, magnesium has a large effect (largest SS) on hardness, and the interaction of Cu and Mg plays a substantial part in determining hardness.

A plot of these data is shown below. The lines in this plot are not parallel, which indicates the presence of an interaction: the factor A effect depends on the level of factor B, and vice versa. If the lines were parallel, there would be no interaction, meaning the factor A effect would be the same regardless of the level of factor B.
[Plot: hardness (transformed) vs. copper level (A1, A2), one line for each magnesium level (B1, B2); the lines are non-parallel, indicating an A×B interaction.]

Geometrically, some useful information is available from the graph: the relative magnitudes of the various effects can be seen. The B effect is the largest, the A×B effect next largest, and the A effect very small.

See:

Ā1 = 29/4 = 7.25          Ā2 = 26/4 = 6.5

B̄1 = 21/4 = 5.25          B̄2 = 34/4 = 8.5
[Plot: the same data with effect magnitudes marked; the B effect (between the B1 and B2 lines) is the largest, the A×B effect (measured from the mid-points of the lines) next largest, and the A effect small.]
So by plotting the data for each factor pair, three situations can be distinguished:

In the first case, there is no interaction, because the lines are parallel.

In the second case, there is some interaction (the lines are not parallel).

In the third case, there is a strong interaction (the lines cross).
3.3 EXAMPLE OF ANOVA

During the late 1980s, Modi Xerox had a large base of customers (50,000) for this copier model, spread over the entire country. Many buyers of these machines earn their livelihood by running copying services. Each of these buyers ultimately serves a very large number of customers (end users). When copy quality is either poor or inconsistent, these buyers earn a bad name and their image and business get affected. In the late 1980s, the company integrated the total quality management philosophy into its operations and placed the highest focus on customer satisfaction: any problem of field failure was given the highest priority for investigation. The problem of skips was subjected to detailed investigation by a cross-functional team from the Production, Marketing, and Quality Assurance Departments.




Problem Description

The pattern of blurred images (skips) observed in the copy is shown in the figure above. It usually occurs between 10 and 60 mm from the lead edge of the paper. Sometimes, on a photocopy taken on a company letterhead, the company logo gets blurred, which is not appreciated by the customer. This problem was noticed in only one-third of the machines produced by the company, with the remaining two-thirds of the machines in the field working well without this problem. The in-house test evaluation record also confirmed the problem in only about 15% of the machines produced. Data analysis indicated that not all the machines produced were faulty. Therefore, the focus of further investigation was to find out what went wrong in the faulty machines, or whether there were basic differences between the components used in good and faulty machines.

Preliminary Investigation
A copier machine consists of more than 1000 components and assemblies. A
brainstorming session by the team helped in the identification of 16
components suspected to be responsible for the problem of blurred images.
Each suspected component had at least two possible dimensional characteristics which could have resulted in the skip symptom. This led to more than 40 probable causes (40 dimensions arising out of 16 components) for the problem. An attempt was made by the team to identify the real causes among these 40 probable causes. Ten bad machines were stripped open and the various dimensions of these 16 components were measured. It was observed that all the dimensions were well within specifications. Hence, this investigation did not give any clues to the problem. Moreover, the time and effort spent in dismantling the faulty machines and checking various dimensions of 16 components was in vain. This gave rise to the thought that conforming to specification does not always lead to perfect quality. The team needed to think beyond the specification in order to find a solution to the problem.

Taguchi Experiment
An earlier brainstorming session had identified 16 components that were likely
to be the cause of this problem. A study of travel documents of 300 problem
machines revealed that on 88% of occasions, the problem was solved by
replacing one or more of only four parts of the machine. These four parts were
from the list of 16 parts identified earlier. They were considered to be critical and it was decided to conduct an experiment on these four parts. These parts were the following:
(a) Drum shaft
(b) Drum gear
(c) Drum flange
(d) Feed shaft
Two sets of these parts were taken for Experiment I, one from an identified problem machine and one from a problem-free machine. The two levels considered in the experiment were good and bad; 'good' signifying parts from the problem-free machine and 'bad' signifying parts from the problem machine. The factors and levels thus identified are given in the table below.




A full factorial experiment would have required 16 trials, while the experiment was designed as an L8(2⁷) fractional factorial using a linear graph and the orthogonal array (OA) table developed by Taguchi.


The linear graph is presented in Fig. 9.24 and the layout in Table 9.14. A
master plan for conducting eight experiments was prepared. The response
considered was the number of defective copies (copies exhibiting the skip
problem) in a 50-copy run. The master plan along with the response is
presented in Table 9.15.




Analysis and Results

The response considered was the fraction defective (p = d/n). Data were normalized by the transformation sin⁻¹(√p). Analysis of variance (ANOVA) was performed on the normalized data and the results are presented in the table.
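As a sketch of this normalization (illustrative only; the defect count below is hypothetical, not the experiment's actual data), the arcsine square-root transform of a fraction defective can be computed as:

```python
import numpy as np

def normalize(defectives, n=50):
    """Arcsine square-root transform of the fraction defective p = d/n."""
    p = np.asarray(defectives, dtype=float) / n
    return np.degrees(np.arcsin(np.sqrt(p)))   # degrees, as in Taguchi's tables

# e.g. 12 defective copies in a 50-copy run (hypothetical value)
print(normalize([12]))   # ~29.3 degrees
```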




F(1,5) at 0.05 = 6.61, F(1,5) at 0.01 = 16.26.
ρA = (3528 − 32.4) × 100/3788 = 92.3%
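The percent contribution used here follows Taguchi's usual definition: the factor's sum of squares, purified by subtracting the error variance once per degree of freedom, expressed as a share of the total sum of squares. A sketch with the values quoted above (read from the ANOVA table omitted here, so treat them as given):

```python
def percent_contribution(SS_factor, v_factor, V_error, SS_total):
    """Taguchi percent contribution: purified factor SS as a share of SS_total."""
    return (SS_factor - v_factor * V_error) * 100 / SS_total

# SS_A = 3528, v_A = 1, V_e = 32.4, SS_T = 3788 (as quoted in the text)
print(round(percent_contribution(3528, 1, 32.4, 3788), 1))   # 92.3
```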

As can be seen from the table, factor A is highly significant (the only significant factor), explaining 92.3% of the total variation. In other words, of the four components studied, the drum shaft alone is the source of trouble for skip. The problem was now narrowed down to one component from the earlier list of 16 components, giving a ray of hope for moving towards a solution. Further investigations were carried out on the drum shaft design.


Drum Shaft Design
The configuration of the drum shaft is defined by 15 dimensions. A brainstorming session by the team members identified wobbling and increased play in the drum shaft as major causes of this problem. Four dimensions of the drum shaft were suspected of causing wobbling and excessive play. These dimensions were checked in all 20 machines (10 good and 10 bad) and found to be well within specification. Now the question arose as to where the problem lay: definitely not within the specification, so perhaps outside the specification? This led us to think beyond the specification in order to find a solution. As a first step, the dimension patterns of good and bad machines were compared. The dimension patterns for the four critical dimensions suspected to be the cause of the problem are shown in the figure below.




There is not much difference in pattern between good and bad machines with respect to dimensions B, C, and D. Dimension A, that is, the diameter over pin (DOP) dimension of the drum shaft splines, revealed a difference in pattern between the good and bad machines. The DOP of shafts from the 10 problem machines was found to lie in the lower half of the specification range, whereas in the case of problem-free machines, the DOP was always in the upper half of the specification range (shown in the figure above). The DOP dimension of the drum shaft is shown in the figure below.




DOP (diameter over pin) is a measure of the tooth thickness, t. A higher DOP means greater tooth thickness of the splines and vice versa. If the DOP of the drum shaft splines is on the lower side, it will increase the clearance, resulting in more play between the drum shaft and the drum gear assembly.




Here, the image of the original document is transferred onto the photoreceptor drum through a series of lenses and mirrors. The photoreceptor drum is coated with a photo-conductor material and is electrically charged with positive charge. During the transfer of the image from the document, the whole of the drum area is exposed to light except the area where the image is formed. Due to the exposure to light, the photo-conductor material becomes a conductor and the charge is neutralized, except in the image area. This image is called the 'latent image'. Subsequently, this image is transferred to paper through toner and developer. During the transfer of the image, the drum should rotate at a uniform speed. Any jerk to the photoreceptor drum during rotation will cause distortion or blur in the latent image. The photoreceptor drum is driven by the drum shaft and drum gear assembly. Excessive play between the drum shaft and drum gear gets magnified and produces jerks in the photoreceptor unit. The bad machines' dimension pattern clearly indicates the possibility of excessive play between the drum shaft and drum gear assembly. A sketch of the photoreceptor assembly is shown below.




A lower DOP results in a larger gap between the drum shaft and drum gear, which causes excessive play in the drum shaft. Technically, excessive play between the drum and drive gear can cause the skip problem. This theory was further confirmed when this model (X) was compared with model Y and model Z, where no skip problem was observed. In models Y and Z, the drum shaft and drive gear were integrated into a single unit. This probably explains the zero play and the absence of the skip defect. The drum and drive gear assemblies of the three models X, Y, and Z are shown in the figure below for comparison.




For further validation of this point, play between the drum shaft and drive gear was eliminated by temporarily integrating the system using a drop of Araldite (glue) in 50 problem machines. A test run was performed on all 50 machines and no skip defect was observed. This led to the conclusion that the drum shaft DOP specifications are not fail-safe against skips. It was now felt necessary to arrive at new specifications for DOP to ensure no excessive play between the drum shaft and drive gear. The question arose as to how much play can be permitted. To find an answer, a similar drive system of the very successful two-wheeler Lambretta scooter was studied, and it was found that the play varied between 0.04 and 0.07 mm. To be on the safe side, it was decided to allow a maximum play of only 0.04 mm between the drum shaft and drive gear. These drum shafts are manufactured by subcontractors, so the new specifications were arrived at by taking into consideration the suppliers' capability of machining these dimensions and the maximum permissible play of 0.04 mm. The old and new specifications for DOP are shown in the figure.




Confirmatory Trial
The implications of the new specifications on other systems of the machine
were examined and it was found that the change in specification would not
create any problems. The 36 worst-affected machines were selected from the
field. Drum shafts with the new specifications were made and then fitted on
these machines. Test results of these machines showed a total elimination of
skip defects.
Ultimately, to give customers the benefit of the study, 5000 drum shafts with the new specifications were made and incorporated in 5000 existing machines of the old design in the field. A sample performance audit of 800 of those 5000 machines in the field was carried out, and none of these 800 machines indicated skip problems. This provided confidence that the new design had worked successfully. After that, the new design was implemented fully by releasing the new specification. The rate of occurrence of the skip problem on the assembly line dropped from the previous 13% to less than 0.5%.

Beating the Benchmark
Machine specifications released by Rank Xerox (UK) permit the occurrence of skip up to 10 mm from the lead edge. Earlier specifications of Modi Xerox permitted the occurrence of skip up to 60 mm from the lead edge but, to most customers, loss of information near the lead edge is not acceptable, as company logos are located near the lead edge of letterheads. The exercise was initially taken up to reach the standard of Rank Xerox (skip up to 10 mm). Now, the modified design of the drum shaft, evolved through scientific and systematic investigation, has completely eliminated the skip and hence has surpassed even the Rank Xerox benchmark of permitting skip up to 10 mm from the lead edge. This is a great accomplishment towards skip-free copying: a problem was completely solved for which no solution was previously available worldwide.




                             CHAPTER 4

4.1                          WHAT IS AN ARRAY

An array's name indicates the number of rows and columns it has, and also the number of levels in each of the columns. Thus the array L4(2³) has four rows and three 2-level columns.
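The defining property of such an array is balance: every level appears equally often in each column, and every pair of columns contains each level combination equally often. A quick check for L4(2³) (an illustrative sketch; the array itself is standard):

```python
import numpy as np
from itertools import combinations

# The L4(2^3) orthogonal array: 4 rows, three 2-level columns
L4 = np.array([[1, 1, 1],
               [1, 2, 2],
               [2, 1, 2],
               [2, 2, 1]])

# Each level appears twice in every column...
for col in range(3):
    print('column', col + 1, np.bincount(L4[:, col])[1:])   # [2 2]

# ...and every pair of columns shows each of the four level pairs exactly once
for i, j in combinations(range(3), 2):
    print(f'columns {i+1},{j+1}:', sorted(map(tuple, L4[:, [i, j]].tolist())))
```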

4.2                     HISTORY OF ORTHOGONAL ARRAY

Historically, related methods were developed for agriculture, largely in the UK, around the time of the Second World War; Sir R. A. Fisher was particularly associated with this work. In a typical field experiment, the field area is divided into rows and columns, with four fertilizers (F1-F4) assigned to the columns and four irrigation levels (I1-I4) to the rows. Since all combinations are taken, sixteen 'cells' or 'plots' result.

The Fisher field experiment is a full factorial experiment, since all 4 × 4 = 16 combinations of the experimental factors, fertilizer and irrigation level, are included.
The number of combinations required may not be feasible or economic. To cut down on the number of experimental combinations, a Latin Square design of experiment may be used. Here there are three fertilizers, three irrigation levels and three alternative additives (A1-A3), but only nine combinations are included instead of the 3 × 3 × 3 = 27 of the full factorial.
These are 'pivotal' combinations, however, that still allow the identification of the best fertilizer, irrigation level and additive, provided there are no serious non-additivities or interactions in the relationship between yield and these control factors:

          F1    F2    F3
I1        A1    A2    A3
I2        A2    A3    A1
I3        A3    A1    A2

The property of Latin Squares that corresponds to this separability is that each of the labels A1, A2, A3 appears exactly once in each row and each column.
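This defining property is easy to verify programmatically. A minimal sketch (assuming the 3 × 3 square shown above):

```python
import numpy as np

square = np.array([['A1', 'A2', 'A3'],
                   ['A2', 'A3', 'A1'],
                   ['A3', 'A1', 'A2']])

# Each additive label must appear exactly once in every row and every column
ok_rows = all(len(set(row)) == 3 for row in square)
ok_cols = all(len(set(col)) == 3 for col in square.T)
print(ok_rows and ok_cols)   # True
```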

A difference from agricultural applications is that in agriculture the 'noise', or the uncontrollable factors that disturb production, also tends to disturb experimentation, the weather being the obvious example. In industry, factors that disturb production, or are uneconomic to control in production, can and should be directly manipulated in tests. Our desire is to identify a design or line calibration which can best survive the transient effects in the manufacturing process caused by the uncontrolled factors. We wish to have small piece-to-piece and time-to-time variations associated with this noise variation. To do this we can force diversity onto the noise conditions by crossing our orthogonal array of controllable factors with a full factorial or orthogonal array of noise factors.

Thus, in the example, we evaluate our product for each of the nine trials against the background of four different combinations of noise conditions. We are looking for one of the nine rows of control-factor combinations, or for one of the 'missing' 72 rows (3 × 3 × 3 × 3 = 81; 81 − 9 = 72), which not only gives the correct mean result on average, but also minimises variation away from the mean. To do this Taguchi introduces the signal-to-noise ratio.
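The signal-to-noise ratio itself is treated in Chapter 5; as a preview, its three standard forms can be sketched as follows (these are the standard Taguchi formulas; the sample responses are hypothetical):

```python
import numpy as np

def sn_smaller_the_better(y):
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(y ** 2))

def sn_larger_the_better(y):
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(1 / y ** 2))

def sn_nominal_the_best(y):
    y = np.asarray(y, dtype=float)
    return 10 * np.log10(y.mean() ** 2 / y.var(ddof=1))

# One control-factor row evaluated under four noise conditions (hypothetical)
print(round(sn_nominal_the_best([7.1, 7.3, 6.9, 7.2]), 2))
```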

4.3 Introduction to Orthogonal Arrays

Engineers and scientists are often faced with two product or process improvement situations.

One development situation is to find a parameter that will improve some performance characteristic to an acceptable and optimum value. This is the most typical situation in most organizations.

A second situation is to find a less expensive, alternative design, material, or method which will provide equivalent performance.
When searching for improved or equivalent designs, the experimenter typically runs some tests, observes some performance of the product and makes a decision to use or reject the new design.

In order to improve the quality of this decision, proper test strategies are utilized. Before describing OAs, let us look at some other test strategies.

The most common test plan is to evaluate the effect of one parameter on product performance. This is what is typically called a one-factor experiment. Such an experiment evaluates the effect of one parameter while holding everything else constant.

The simplest case of testing the effect of one parameter on performance would be to run a
test at two different conditions of that parameter.

For example: the effect of cutting speed on the finish of a machined part. Two different
cutting speeds could be used and the resultant finish measured to determine which cutting
speed gave better results. If the first level, the first cutting speed, is symbolized by 1 and
the second level by 2, the experimental conditions will look like this:
Trial No.                       Factor Level                      Test Results
1                               1                                 *,*
2                               2                                 *,*
The * symbolizes the value of finish that would be obtained.

This sample of two (in this case) could be averaged and compared to the second test.

If there happens to be an interaction of this factor with some other factor then this
interaction cannot be studied.

Several Factors One at a Time

If this doesn’t work, the next progression is to evaluate the effect of several parameters on
product performance one at a time. Let us assume the experimenter has looked at four
different factors A, B, C and D each evaluated one at a time.
The resultant test program may appear like the table below:

                 Factor and Factor Level                                        Test Results
Trial No.        A             B                C               D
1                1             1                1               1               *,*
2                2             1                1               1               *,*
3                1             2                1               1               *,*
4                1             1                2               1               *,*
5                1             1                1               2               *,*
One can see that the first trial is the base line condition and results of trial 2 can be
compared to trial 1 to estimate the effect of factor A on product performance.

Similarly results of trial 3 can be compared to trial 1 to estimate the effect of factor B and
so on.

The main limitation of several factors one at a time is that no interaction among the factors
studied can be observed.

Also, the strategy makes limited use of the data when evaluating factor effects. Of the ten data points in the above example, only two were used to compare against two others; the remaining six data points were temporarily ignored.

If we try to use all the data points, then the experiment will not remain orthogonal. One
main requirement of orthogonality is a balanced experiment which means equal number of
samples under various test conditions. (Equal no. of tests under A1 and A2)

For instance, in the above experiment, if all the data under A1 and A2 are averaged and compared, this is not a fair comparison of A1 to A2.

Of the four trials under level A1, three were at level B1 and one at level B2. The one trial under A2 was at level B1.

Therefore, if factor B has an effect on the performance, part of that effect will appear in the observed effect of factor A, and vice versa.

Only when trial 1 is compared to other trials one at a time are the factor effects orthogonal.

Several factors all at the same time

The most desperate and urgent situations find the experimenter evaluating the effect of several parameters on performance all at the same time.

Here the experimenter hopes that at least one of the changes will improve the situation sufficiently.

                Factor and Factor Level                                      Test Results
Trial No.       A             B              C              D
1               1             1              1              1                *,*
2               2             2              2              2                *,*
This situation makes separation of the main factor effects impossible, let alone any interaction effects.

Some factors may have a positive effect and some a negative one, but we will get no hint of this from the data.

4.3.1 Investigating many factors – a case study

In most problems, preliminary brainstorming would reveal a large number of factors which
may influence the output of the process under study.

How are the effects of these factors prioritized?

The traditional approach is to
  - Isolate what is believed to be the most important factor
  - Investigate this factor by itself, ignoring all others
  - Make recommendations on changes to this crucial factor
  - Move on to the next factor and repeat

This OFAT (one factor at a time) approach has several critical weaknesses. The factorial approach, in which several factors are studied simultaneously in a balanced manner, is much better. We will try to understand this through an example.




4.3.1.1 Example
A process producing steel springs is generating considerable scrap due to cracking after
heat treatment. A study is planned to determine better operating conditions to reduce the
cracking problem.

There are several ways to measure cracking
  - Size of the crack
  - Presence or absence of cracks

The response selected was
Y: the percentage without cracks in a batch of 100 springs

Three major factors were believed to affect the response
   - T: Steel temperature before quenching
   - C: carbon content (percent)
   - O: Oil quenching temperature

Levels chosen for the study are:
Factor                         Low (Level 1)                 High (Level 2)
T                              1450 °F                       1600 °F
C                              0.5%                          0.7%
O                              70 °F                         120 °F

Classical approach: OFAT

Experiment: Four runs at each level of T, with C and O at their low levels.

Steel Temp.         % springs without cracks                                Average
1450                61         67           68              66              65.5
1600                79         75           71              77              75.5
Conclusion: Increasing T raises the percentage of crack-free springs by about 10 points.

Problem: How general is this conclusion? Does it depend upon
   - Quench Temperature?
   - Carbon Content?
   - Steel chemistry?
   - Spring type?
   - Analyst
   - Etc.??

Carrying out similar OFAT experiments for C and O would require a total of 24
observations and provide limited knowledge.


Factorial Approach:
   - Include all factors in a balanced design:
   - To increase the generality of the conclusions, use a design that involves all eight
      combinations of the three factors.
The treatments for the eight runs are given as under:
Run                     C                      T                    O
1                       0.5                    1450                 70
2                       0.7                    1450                 70
3                       0.5                    1600                 70
4                       0.7                    1600                 70
5                       0.5                    1450                 120
6                       0.7                    1450                 120
7                       0.5                    1600                 120
8                       0.7                    1600                 120
The above eight runs constitute a FULL FACTORIAL DESIGN. The design is balanced for every factor: 4 runs have T at 1450 and 4 have T at 1600, and the same is true for C and O.
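The eight balanced runs are simply the Cartesian product of the factor levels. A sketch that reproduces the run order of the table above (illustrative; the variable names are my own):

```python
from itertools import product

C_levels = [0.5, 0.7]     # carbon content, %
T_levels = [1450, 1600]   # steel temperature, deg F
O_levels = [70, 120]      # oil quench temperature, deg F

# product() varies its last argument fastest, so listing (O, T, C) and
# reversing each tuple reproduces the run order of the table above
runs = [(c, t, o) for o, t, c in product(O_levels, T_levels, C_levels)]
for i, (c, t, o) in enumerate(runs, start=1):
    print(i, c, t, o)

# Balance check: every level of every factor appears in exactly half the runs
assert sum(1 for c, _, _ in runs if c == 0.5) == len(runs) // 2
assert sum(1 for _, t, _ in runs if t == 1450) == len(runs) // 2
assert sum(1 for _, _, o in runs if o == 70) == len(runs) // 2
```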
IMMEDIATE ADVANTAGES
   - The effect of each factor can be assessed by comparing the responses from the appropriate sets of four runs.
   - More general conclusions.
   - 8 runs rather than 24.
The data for the complete factorial experiment are:
Run                C                  T                 O                 Y
1                  0.5                1450              70                67
2                  0.7                1450              70                61
3                  0.5                1600              70                79
4                  0.7                1600              70                75
5                  0.5                1450              120               59
6                  0.7                1450              120               52
7                  0.5                1600              120               90
8                  0.7                1600              120               87

The main effects of each factor can be estimated by the difference between the average of
the responses at the high level and the average of the responses at the low level.
For example to calculate the O main effect:

                                 67 + 61 + 79 + 75
Avg. of responses with O as 70 =                   = 70.5
                                         4
                                  59 + 52 + 90 + 87
Avg. of responses with O as 120 =                    = 72
                                           4

So the main effect of O is 72.0 − 70.5 = 1.5.
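All three main effects can be computed the same way. A short sketch over the eight observed runs (high-level average minus low-level average for each factor):

```python
import numpy as np

# The eight full-factorial runs: columns are C, T, O, Y
runs = np.array([
    [0.5, 1450,  70, 67], [0.7, 1450,  70, 61],
    [0.5, 1600,  70, 79], [0.7, 1600,  70, 75],
    [0.5, 1450, 120, 59], [0.7, 1450, 120, 52],
    [0.5, 1600, 120, 90], [0.7, 1600, 120, 87],
])

def main_effect(col):
    lo, hi = np.unique(runs[:, col])
    return runs[runs[:, col] == hi, 3].mean() - runs[runs[:, col] == lo, 3].mean()

for name, col in [('C', 0), ('T', 1), ('O', 2)]:
    print(name, main_effect(col))
# C: -5.0, T: 23.0, O: 1.5; O looks unimportant until its interaction
# with T is examined below.
```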
[Plot: average response Y vs. oil quench temperature O (70 to 120); the O main-effect line is nearly flat, rising only from 70.5 to 72.]
The apparent conclusion is that changing the oil temperature from 70 to 120 has little
effect.
The use of the factorial approach allows examination of two-factor interactions. For example, we can estimate the effect of factor O at each level of T.

At T = 1450
                                   67 + 61
Avg. of responses with O as 70 =           = 64.0
                                      2
                                     59 + 52
Avg. of responses with O as 120 =            = 55.5
                                        2

So the effect of O is 55.5 – 64 = -8.5

At T = 1600
                                   79 + 75
Avg. of responses with O as 70 =           = 77.0
                                      2
                                     90 + 87
Avg. of responses with O as 120 =            = 88.5
                                        2

So the effect of O is 88.5 – 77 = 11.5

The conclusion is that at T = 1450, increasing O decreases the average response by 8.5
whereas at T = 1600, increasing O increases the average response by 11.5.
That is, O has a strong effect but the nature of the effect depends on the value of T.

This is called interaction between T and O in their effect on the response.
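The conditional effects that reveal this interaction can be computed directly from the data. A sketch (repeating the run array used in the previous example):

```python
import numpy as np

# The eight runs again: columns are C, T, O, Y
runs = np.array([
    [0.5, 1450,  70, 67], [0.7, 1450,  70, 61],
    [0.5, 1600,  70, 79], [0.7, 1600,  70, 75],
    [0.5, 1450, 120, 59], [0.7, 1450, 120, 52],
    [0.5, 1600, 120, 90], [0.7, 1600, 120, 87],
])

def effect_of_O_at(T):
    sel = runs[runs[:, 1] == T]
    return sel[sel[:, 2] == 120, 3].mean() - sel[sel[:, 2] == 70, 3].mean()

print(effect_of_O_at(1450))   # -8.5
print(effect_of_O_at(1600))   # +11.5; the sign flips with T, a T x O interaction
```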

It is convenient to summarize the four averages corresponding to the four combinations of
T and O in a table:

                                    O
                                    70                 120                Average
                  1450              64                 55.5               59.75
T
                  1600              77                 88.5               82.75
Average                             70.5               72                 71.25



[Plot: response Y vs. steel temperature T (1450 to 1600), one line for O = 70 and one for O = 120; the lines cross, showing the T×O interaction.]
When an interaction is present the lines on the plot will not be parallel. When an
interaction is present the effect of the two factors must be considered simultaneously.

The lines are added to the plot only to help with the interpretation. We cannot know that
the response will increase linearly.

Two-way tables of averages and plots for the other factor pairs are:
                                  C
                                  0.5                 0.7                 Average
                1450              63.0                56.5                59.75
T
                1600              84.5                81.0                82.75
Average                           73.75               68.75               71.25
[Plot: response Y vs. steel temperature T, one line for C = 0.5 and one for C = 0.7; the lines are roughly parallel, indicating little T×C interaction.]
                                  O
                                  70                120             Average
                 0.5              73                74.5            73.75
C
                 0.7              68                69.5            68.75
Average                           70.5              72              71.25




Conclusions:
   - C has little effect
   - There is an interaction between T and O.
 Recommendations:
   - Run the process with T and O at their high levels to produce about 90% crack free
      product (further investigation at other levels might produce more improvement).
   - Choose the level of C so that the lowest cost is realized.
Comparison with OFAT
On the basis of the observed data, we can see that the OFAT approach leads to different conclusions if the factors are considered in the following order:

Fix T = 1450 and C = 0.5 and vary O; conclude O = 70 is best.
Run               C                T                  O                Y
1                 0.5              1450               70               67
2                 0.7              1450               70               61
3                 0.5              1600               70               79
4                 0.7              1600               70               75

Unraveling Multimodality with Large Language Models.pdf
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An Introduction
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 

Seminar Report On Taguchi Methods2

Preface

When Japan began its reconstruction efforts after World War II, it faced an acute shortage of good-quality raw material, high-quality manufacturing equipment, and skilled engineers. The challenge was to produce high-quality products, and to keep improving product quality, under these circumstances. The task of developing a methodology to meet this challenge was assigned to Dr. Genichi Taguchi, who at that time was the manager in charge of developing certain telecommunication products at the Electrical Communication Laboratories (ECL) of the Nippon Telephone and Telegraph Company (NTT). Through his research in the 1950s and early 1960s, Dr. Taguchi developed the foundation of robust design and validated his basic philosophies by applying them in the development of many products. In recognition of this contribution, Dr. Taguchi received the individual Deming Award in 1962, one of the highest recognitions in the quality field.

CHAPTER 1
1.1 Introduction

Genichi Taguchi attended Kiryu Technical College, where he studied textile engineering. From 1942 to 1945 he served in the Astronomical Department of the Navigation Institute of the Imperial Japanese Navy. After that, he worked in the Ministry of Public Health and Welfare and at the Institute of Statistical Mathematics, Ministry of Education. While working there, he was educated by Matosaburo Masuyama on the use of orthogonal arrays and on different experimental design techniques. In 1950 he began working at the newly formed Electrical Communications Laboratory of the Nippon Telephone and Telegraph Company. He stayed there for more than 12 years and was responsible for training engineers to be more effective with their techniques. While he was there, he consulted with many different Japanese companies and also wrote his first book on orthogonal arrays. He served as a visiting professor at the Indian Statistical Institute from 1954 to 1955, where he met Sir R.A. Fisher and Walter A. Shewhart. He published the first edition of his two-volume book on experimental design in 1958. He made his first visit to the United States in 1962 as a visiting professor at Princeton University; in the same year he was awarded his PhD from Kyushu University. He developed the concept of the quality loss function in the 1970s and published the third and most current edition of his book on experimental designs. He revisited the United States in 1980, and from then on his methods spread and became more widely used.

Genichi Taguchi made many important contributions during his lifetime, most notably to the field of quality control, though he also made many important contributions to experimental design. Professor Taguchi was the director of the Japanese Academy of Quality and a four-time recipient of the Deming Prize. The term "Taguchi Methods" was coined in the United States.

Although SPC can assist the operator in eliminating special causes of defects, thus bringing a process under control, something more is still needed: the continuous improvement of manufacturing processes so that the production of robust products can be assured. This is where Taguchi comes in; he starts where SPC (temporarily) finishes. He can help with the identification of common causes of variation, the most difficult to determine and eliminate in a process. He attempts to go even further: he tries to make the process and the product robust against their effect (eliminating the effect rather than the cause) at the design stage; indeed, in dealing with uncontrollable (noise) factors, there is no alternative. Even when removal of the effect is impossible, he provides a systematic procedure for controlling the noise (through tolerance design) at minimum cost.

When Dr. Taguchi first brought his ideas to America in 1980, he was already well known in Japan for his contributions to quality engineering. His arrival in the U.S. went virtually unnoticed, but by 1984 his ideas had generated so much interest that Ford Motor Company sponsored the first Supplier Symposium on Taguchi Methods.
1.2 Definitions of Quality

• "Fitness for use" (Dr. Juran, 1964)
• "Conformance to requirements" (Philip Crosby, the leading promoter of the "zero defects" concept and author of Quality is Free, 1979)
• Quality should be aimed at the needs of the consumer, present and future (Dr. Deming)
• The totality of features and characteristics of a product or service that bear on its ability to satisfy given needs (The American Society for Quality Control, 1983)
• The aggregate of properties of a product determining its ability to satisfy the needs it was built to satisfy (Russian Encyclopaedia)
• The totality of features and characteristics of a product and service that bear on its ability to satisfy a given need (European Organization for Quality Control Glossary, 1981)

Although these definitions are all different, some common threads run through them:

• Quality is a measure of the extent to which customer requirements and expectations are satisfied.
• Quality is not static, since customer expectations can change.
• Quality involves developing product or service specifications and standards to meet customer needs (quality of design) and then manufacturing products or providing services which satisfy those specifications and standards (quality of conformance).

It is important to note that these definitions do not refer to grade or features. For example, a Honda car has more features and is generally considered a higher-grade car than a Maruti, but that does not mean it is of better quality. A couple with two children may find that a Maruti does a much better job of meeting their requirements in terms of ease of loading and unloading, comfort when the entire family is in the car, gas mileage, maintenance, and, of course, the basic cost of the vehicle.

1.2.1 Traditional and Taguchi Definition of Quality
Traditional definition: The more traditional "goalpost" mentality of what is considered good quality says that a product is either good or it isn't, depending on whether or not it is within the specification range (between the lower and upper specification limits, the goalposts). With this approach, the specification range is more important than the nominal (target) value. But is the product as good as it can be, or should be, just because it is within specification?

[Figure: the traditional "goalpost" definition of quality]

Taguchi definition: Taguchi says no to the above definition. He defines quality as deviation from on-target performance. According to him, the quality of a manufactured product is the total loss generated by that product to society from the time it is shipped.

[Figure: the Taguchi definition of quality; financial (quality) loss versus deviation from target]

L(y) = k(y − m)²

where
y = objective characteristic
m = target value
k = constant = (cost of a defective product)/(tolerance)² = A/Δ²
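To make the formula concrete, here is a minimal Python sketch of the quadratic loss; the numbers used (target m = 50, deviation cost A = 300 Rs at Δ = 5) are hypothetical, chosen only to illustrate the calculation.

def quality_loss(y, m, A, delta):
    # Taguchi quadratic loss: L(y) = k * (y - m)^2, with k = A / delta^2
    k = A / delta ** 2
    return k * (y - m) ** 2

# Hypothetical example: repair cost A = 300 Rs at a deviation of
# delta = 5 units from the target m = 50; observed y = 53.
print(quality_loss(y=53.0, m=50.0, A=300.0, delta=5.0))  # 108.0 Rs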
1.3 Taguchi's Quality Philosophy

Genichi Taguchi's impact on the world quality scene has been far-reaching. His quality engineering system has been used successfully by many companies in Japan and elsewhere. He stresses the importance of designing quality into products and processes, rather than depending on the more traditional tools of on-line quality control. Taguchi's approach differs from that of other leading quality gurus in that he focuses more on the engineering aspects of quality than on management philosophy or statistics. Also, Dr. Taguchi uses experimental design primarily as a tool to make products more robust, that is, less sensitive to noise factors; he views experimental design as a tool for reducing the effects of variation on product and process quality characteristics. Earlier applications of experimental design focused more on optimizing average product performance characteristics without considering the effects on variation.

Taguchi's quality philosophy has seven basic elements:

1. An important dimension of the quality of a manufactured product is the total loss generated by the product to society. At a time when the BOTTOM LINE appears to be the driving force for so many organizations, it seems strange to see "loss to society" as part of product quality.

2. In a competitive economy, continuous quality improvement and cost reduction are necessary for staying in business. This is a hard lesson to learn. Masaaki Imai (1986) argues very persuasively that the principal difference between Japanese and American management is that American companies look to new technologies and innovation as the major route to improvement, while Japanese companies focus more on "Kaizen", gradual improvement in everything they do. Taguchi stresses the use of experimental designs in parameter design as a way of reducing quality costs. He identifies three types of quality costs: R&D costs, manufacturing costs, and operating costs. All three can be reduced through the use of suitable experimental designs.

3. A continuous quality improvement program includes continuous reduction in the variation of product performance characteristics about their target values. Again Kaizen, but with the focus on product and process variability. This does not fit the mold of quality as conformance to specification.

4. The customer's loss due to a product's performance variation is often approximately proportional to the square of the deviation of the performance characteristic from its target value. This concept of a quadratic loss function says that any deviation from target results in some loss to the customer, but that large deviations from target result in severe losses.

5. The final quality and cost of a manufactured product are determined to a large extent by the engineering designs of the product and its manufacturing process. This is so
simple, and so true. The future belongs to companies which, once they understand the variability of their manufacturing processes using statistical process control, move their quality improvement efforts upstream to product and process design.

6. A product (or process) performance variation can be reduced by exploiting the nonlinear effects of the product (or process) parameters on the performance characteristics. This is an important statement because it gets to the heart of off-line QC. Instead of trying to tighten specifications beyond a process's capability, perhaps a change in design can allow specifications to be loosened. As an example, suppose that in a heating process the tolerance on temperature is a function of the heating time in the oven. The tolerance relationship is represented by the band in the figure. If a process specification says the heating process is to last 4.5 minutes, then the temperature must be held between 354.0 and 355.0 degrees, a tolerance interval 1.0 degrees wide. Perhaps the oven cannot hold this tight a tolerance. One solution would be to spend a lot of money on a new oven and new controls. Another possibility would be to change the time for the heating process to, say, 3.5 minutes. Then the temperature would need to be held between 358.0 and 360.6 degrees, an interval of width 2.6 degrees. If the oven could hold this tolerance, the most economical decision might be to adjust the specifications. This would make the process less sensitive to variation in oven temperature.

[Figure: time-temperature tolerance relationship]

7. Statistically designed experiments can be used to identify the settings of product parameters that reduce performance variation, and hence improve quality,
productivity, performance, reliability, and profits. Statistically designed experiments will be the strategic quality weapon of the 1990s.

1.4 Objective of Taguchi Methods

The objective of Taguchi's efforts is process and product-design improvement through the identification of easily controllable factors and their settings, which minimize the variation in product response while keeping the mean response on target. By setting those factors at their optimal levels, the product can be made robust to changes in operating and environmental conditions. Thus, more stable and higher-quality products can be obtained, and this is achieved during Taguchi's parameter-design stage by removing the bad effect of the cause rather than the cause of the bad effect. Furthermore, since the method is applied in a systematic way at a pre-production stage (off-line), it can greatly reduce the number of time-consuming tests needed to determine cost-effective process conditions, thus saving costs and reducing wasted product.

1.5 8 Steps in Taguchi Methodology

1. Identify the main function, side effects, and failure mode.
2. Identify the noise factors, testing conditions, and quality characteristics.
3. Identify the objective function to be optimized.
4. Identify the control factors and their levels.
5. Select the orthogonal array and construct the matrix experiment.
6. Conduct the matrix experiment.
7. Analyze the data; predict the optimum levels and performance.
8. Perform the verification experiment and plan the future action.
CHAPTER 2

2.1 Taguchi Loss Function

Genichi Taguchi has an unusual definition for product quality: "Quality is the loss a product causes to society after being shipped, other than any losses caused by its intrinsic functions." By "loss" Taguchi refers to two categories:

• loss caused by variability of function;
• loss caused by harmful side effects.

An example of loss caused by variability of function would be an automobile that does not start in cold weather. The car's owner would suffer a loss if he or she had to pay someone to start the car, and the owner's employer loses the services of an employee who is now late for work. An example of a loss caused by a harmful side effect would be frostbite suffered by the owner of the car that would not start. An unacceptable product which is scrapped or reworked prior to shipment is viewed by Taguchi as a cost to the company, but not as a quality loss.
2.1.1 Comparing the Quality Levels of Sony TV Sets Made in Japan and in San Diego

The front page of the Asahi News on April 17, 1979, compared the quality levels of Sony color TV sets made in Japanese plants with those made in the San Diego, California, plant. The quality characteristic used to compare these sets was the color density distribution, which affects color balance. Although all the color TV sets had the same design, most American customers thought that the sets made in the San Diego plant were of lower quality than those made in Japan. The distribution of the quality characteristic of these color TV sets was given in the Asahi News (shown in the figure). The quality characteristic of the TV sets from the Japanese Sony plants is normally distributed around the target value m. If a value of 10 is assigned to the range of the tolerance specifications for this objective characteristic, then the standard deviation of this normally distributed curve is about 10/6. In quality control, the process capability index (Cp) is usually defined as the tolerance specification divided by 6 times the standard deviation of the objective characteristic:

Cp = tolerance / (6 × standard deviation)
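Both capability indices discussed next can be verified with a short calculation. The sketch below uses only the figures given in this example: a tolerance width of 10, a normal distribution with σ = 10/6 for the Japanese plant, and a uniform distribution over the specification range (σ = 10/√12) for San Diego.

import math

def cp(tolerance_width, sigma):
    # Process capability index: Cp = tolerance width / (6 * sigma)
    return tolerance_width / (6.0 * sigma)

tol = 10.0                               # tolerance width from the example
sigma_japan = tol / 6.0                  # normal distribution filling the range
sigma_san_diego = tol / math.sqrt(12.0)  # uniform over the spec range

print(cp(tol, sigma_japan))      # 1.0
print(cp(tol, sigma_san_diego))  # ~0.577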
Therefore, the process capability index of the objective characteristic of the Japanese Sony TV sets is about 1. In addition, the mean value of the distribution of these objective characteristics is very close to the target value m. On the other hand, a higher percentage of TV sets from San Diego Sony are within the tolerance limits than those from Japanese Sony. However, the color density of the San Diego TV sets is uniformly distributed rather than normally distributed, so the standard deviation of these uniformly distributed objective characteristics is about 1/√12 of the tolerance specification. Consequently, the process capability index of the San Diego Sony plant is

Cp = tolerance / (6 × tolerance/√12) ≈ 0.577

It is obvious that the process capability index of San Diego Sony is much lower than that of Japanese Sony. All products outside the tolerance specifications are considered defective and are not shipped out of the plant; products within the tolerance specifications are assumed to pass and are shipped. As a matter of fact, tolerance specifications are very similar to tests in school, where 60% is usually the dividing line between passing and failing and 100% is the ideal score. In our example of the TV sets, the ideal condition is that the objective characteristic, color density, meets the target value m. The more the color density deviates from the target value, the lower the quality level of the TV set. If the deviation of color density exceeds the tolerance specifications, m ± 5, a TV set is considered defective. In the case of a school test, 59% or below is failing, while 60% or above is passing. Similarly, the grades between 60% and 100% in evaluating quality can be classified as follows:

60%-69%   Passing (D)
70%-79%   Fair (C)
80%-89%   Good (B)
90%-100%  Excellent (A)

The "grades" D, C, B, and A in parentheses above are quite commonly used in the United States for categorizing the objective characteristics of products. Thus, one can apply this scheme to the classification of the objective characteristic (color density) of these color TV sets as shown in the figure. A very high percentage of Japanese Sony TV sets are within grade B, and a very low percentage are within or below grade D. In comparison, the color TV sets from San Diego Sony have about the same percentage in grades A, B, and C. To reduce the difference in process capability indices between Japanese Sony and San Diego Sony (and thus seemingly increase the quality level of the San Diego sets), the latter tried to tighten the tolerance specifications to extend only to grade C rather than grade D, so that only the products within grades A, B, and C were treated as passing. But this approach is faulty: tightening the tolerance specifications because of a low process capability in a production plant is as meaningless as increasing the passing score of school tests from 60% to 70% just because students do not learn well. On the contrary, such a school should consider asking the teachers to lower the passing score for students who do not test well instead of raising it. The next section illustrates how to evaluate the functional quality of products meaningfully and correctly.

Quadratic Loss Function

When an objective characteristic y deviates from its target value m, some financial loss will occur. Therefore, the financial loss, sometimes referred to simply as quality loss or used as an expression of quality level, can be assumed to be a function of y, which we shall designate L(y). When y meets the target m, L(y) is at its minimum; generally, the financial loss can be assumed to be zero under this ideal condition:

L(m) = 0 ……………… Equation 2.1

Since the financial loss is at a minimum at this point, the first derivative of the loss function with respect to y at this point is also zero:

L′(m) = 0 ……………… Equation 2.2

If one expands the loss function L(y) in a Taylor series around the target value m and takes Equations (2.1) and (2.2) into consideration, one obtains

L(y) = L(m) + L′(m)(y − m)/1! + L″(m)(y − m)²/2! + …
     = L″(m)(y − m)²/2! + …
This result is obtained because the constant term L(m) and the first-derivative term L′(m) are both zero. In addition, the third-order and higher-order terms are assumed to be negligible. Thus, one can express the loss function as a squared term multiplied by a constant k:

L(y) = k(y − m)² ……………… Equation 2.3

When the deviation of the objective characteristic from the target value increases, the corresponding quality loss increases. When the magnitude of the deviation is outside the tolerance specifications, the product is considered defective. Let the cost due to a defective product be A and the corresponding magnitude of the deviation from the target value be Δ. Substituting Δ into the right-hand side of Equation (2.3), one can determine the value of the constant k:

k = (cost of a defective product)/(tolerance)²

In the case of the Sony color TV sets, let the adjustment cost be A = 300 Rs when the color density is out of the tolerance specifications. Then

k = 300/5² = 12 Rs

and the loss function is L(y) = 12(y − m)².

This equation is still valid even when only one unit of product is made. Consider the visitor to the BHEL Heavy Electric Equipment Company in India who was told: "In our company, only one unit of product needs to be made for our nuclear power plant. In fact, it is not really necessary for us to make another unit. Since the sample size is only one, the variance is zero. Consequently, the quality loss is zero, and it is not really necessary for us to apply statistical approaches to reduce the variance of our product." However, the quality loss function L = k(y − m)² is defined in terms of the mean square deviation of the objective characteristic from its target value, not the variance of the products. Therefore, even when only one product is made, the corresponding quality loss can still be calculated by Equation (2.3). Generally, the mean square deviation of the objective characteristics from their target values can be used to estimate the mean value of the quality loss in Equation (2.3). One can calculate the mean square deviation from target, σ² (not the variance here; the term σ² is also called the mean square error), as

σ² = (1/n) Σ (yi − m)²

Table 2.1 compares the quality of the Sony TV sets, where the tolerance specification is 10 and the objective-characteristic data correspond to the figure.
Table 2.1

Location    Mean value   Standard deviation   Variation σ²   Loss L (in Rs)   Defective ratio
Japan       m            10/6                 10²/36         33               0.27%
San Diego   m            10/√12               10²/12         100              0.00%

Substituting this mean square deviation into Equation (2.3), one gets

L = kσ² ……………… Equation 2.4

From Equation (2.4), one can evaluate the differences in average quality level between the TV sets from Japanese Sony and those from San Diego Sony, as shown in Table 2.1. From the table it is clear that although the defective ratio of Japanese Sony is higher than that of San Diego Sony, the quality level of the former is three times higher than that of the latter. Assume that one tightens the tolerance specifications of the San Diego Sony TV sets to m ± 10/3, and also assume that these TV sets remain uniformly distributed after the tolerance specifications are tightened. The average quality level of the San Diego Sony TV sets would then improve to

L = 12[(1/√12)(10)(2/3)]² ≈ 44 Rs

where the value of the loss function is taken as the relative quality level of the product. This average quality level is an improvement of 56 Rs over the original, but it is still 11 Rs worse than that of the Japanese Sony TV sets. In addition, in this type of quality improvement, one must adjust the products that fall between the two tolerance limits, m ± 10/3 and m ± 5, to be within m ± 10/3. In the uniform distribution shown in Figure 2.1, 33.3% of the sets would need adjustment, at a cost of 300 Rs per unit. Therefore, each TV set from San Diego Sony would cost an additional (300)(0.333) = 100 Rs on average for the adjustment. Consequently, it is not really a good idea to spend 100 Rs more to adjust each product for an improvement worth only 56 Rs; a better way is to apply quality management methods to improve the quality level of the products.
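The loss figures in Table 2.1, and the tightened-tolerance case, follow directly from L = kσ²; the sketch below reproduces them using k = 12 Rs from the example.

import math

k = 12.0      # Rs, from k = A / delta^2 = 300 / 5^2
tol = 10.0    # tolerance width

msd_japan = (tol / 6.0) ** 2                  # sigma^2, normal distribution
msd_san_diego = (tol / math.sqrt(12.0)) ** 2  # sigma^2, uniform distribution
print(k * msd_japan)       # ~33.3 Rs average loss per set
print(k * msd_san_diego)   # 100.0 Rs average loss per set

# Tightened San Diego tolerance: uniform over a range 2/3 as wide
msd_tight = ((2.0 / 3.0) * tol / math.sqrt(12.0)) ** 2
print(k * msd_tight)       # ~44.4 Rs per set, versus a 100 Rs adjustment cost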
2.2 Variation of the Quadratic Loss Function

1. Nominal-the-best type: Whenever the quality characteristic y has a finite target value, usually nonzero, and the quality loss is symmetric on either side of the target, the characteristic is called nominal-the-best. Its loss is given by

L(y) = k(y − m)² ……………… Equation 1

The color density of a television set and the output voltage of a power supply circuit are examples of nominal-the-best quality characteristics.

2. Smaller-the-better type: Some characteristics, such as radiation leakage from a microwave oven, can never take negative values. Their ideal value is zero, and as their value increases the performance becomes progressively worse. Such characteristics are called smaller-the-better quality characteristics. The response time of a computer, leakage current in electronic circuits, and pollution from an automobile are additional examples. The quality loss in such situations can be approximated by the following function, obtained from Equation 1 by substituting m = 0:

L(y) = ky²

This is a one-sided loss function because y cannot take negative values.
3. Larger-the-better type: Some characteristics, such as the bond strength of adhesives, also do not take negative values, but zero is their worst value; as their value becomes larger, the performance becomes progressively better, that is, the quality loss becomes progressively smaller. Their ideal value is infinity, at which point the loss is zero. Such characteristics are called larger-the-better. The reciprocal of such a characteristic has the same qualitative behavior as a smaller-the-better characteristic, so we approximate the loss function by substituting 1/y for y in Equation 1:

L(y) = k[1/y²]

4. Asymmetric loss function: In certain situations, deviation of the quality characteristic in one direction is much more harmful than in the other. In such cases, one can use a different coefficient k for each direction, and the quality loss is approximated by the asymmetric loss function

L(y) = k₁(y − m)²,  y > m
L(y) = k₂(y − m)²,  y ≤ m
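The four variants can be collected into a small set of Python functions; a minimal sketch (the function names simply mirror the categories above and are not a standard API):

def nominal_the_best(y, m, k):
    # L(y) = k (y - m)^2; loss symmetric about a finite target m
    return k * (y - m) ** 2

def smaller_the_better(y, k):
    # L(y) = k y^2; ideal value is zero (e.g., radiation leakage)
    return k * y ** 2

def larger_the_better(y, k):
    # L(y) = k / y^2; ideal value is infinity (e.g., bond strength)
    return k / y ** 2

def asymmetric(y, m, k1, k2):
    # Different coefficients on the two sides of the target
    return (k1 if y > m else k2) * (y - m) ** 2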
CHAPTER 3

Introduction to Analysis of Variation

3.1 Understanding Variation

The purpose of product or process development is to improve the performance characteristics of the product or process relative to customer needs and expectations. The purpose of experimentation should be to reduce and control the variation of a product or process; subsequently, decisions must be made about which parameters affect the performance of the product or process. Since variation is a large part of the discussion of quality, analysis of variation (ANOVA) is the statistical method used to interpret experimental data and make the necessary decisions.

3.2 What is ANOVA

ANOVA is a statistically based decision tool for detecting differences in the average performance of groups of items tested. It is a mathematical technique which breaks total variation down into accountable sources; that is, total variation is decomposed into its appropriate components. We will start with a very simple case and then build up to more comprehensive situations; thereafter, we will apply ANOVA to some very specialized experimental situations.

3.2.1 No-Way ANOVA

Imagine an engineer is sent to the production line to sample a set of windshield pumps for the purpose of measuring flow rate. The data collected are as follows:

Pump no.             1  2  3  4  5  6  7  8
Flow rate (oz/min)   5  6  8  2  5  4  4  6

(1 oz/min = 0.473 ml/s)
No-way ANOVA breaks total variation down into only two components:

1. the variation of the average (or mean) of all the data points relative to zero;
2. the variation of the individual data points around the average (traditionally called experimental error).

The notation used in the calculations is as follows:

y = observation, or response, or simply data
yi = ith response; for example, y3 = 8 oz/min
N = total number of observations
T = sum of all observations
T̄ = average of all observations = T/N = ȳ

In this case N = 8, T = 40 oz/min, and T̄ = 5.0 oz/min.

What is the reason for the variation from pump to pump? The true flow rate is actually unknown; it is only estimated through the use of some flow meter. There will be some unknown measurement error present, but the flow rate will nonetheless be observed and accepted as the pump's performance under the conditions of the test. Also, the pumps were randomly selected. Although the manufacturer produces nominally identical pumps, there will be slight differences from pump to pump, causing pump-to-pump variation in performance (this is the natural variation of the process). It is for this reason that the flow rates of the pumps are not identical.

No-way ANOVA can be illustrated graphically. [Graphical illustration omitted.] The magnitude of each observation can be represented by a line segment extending from zero to the observation. These line segments can be divided into two portions: one portion attributed to the mean and the other attributed to the error. The error includes the natural process variation and the measurement error. The magnitude of the line segment due to the mean is indicated by extending a line from the average value to zero; the magnitude of the line segment due to error is indicated by the difference between each observation and the average value.
To calculate the total variation present, the magnitude of each line segment can be squared and then summed to provide a measure of the total variation:

SST = total sum of squares = 5² + 6² + 8² + … + 6² = 222.0

The magnitude of the line segment due to the mean can also be squared and summed:

SSm = sum of squares due to the mean = N(T̄)²

Since T̄ = T/N,

SSm = N(T/N)² = T²/N = 40²/8 = 200.0

The portion of the magnitude of each line segment due to error can be squared and summed to provide a measure of the variation around the average value:

SSe = error sum of squares = Σ (yi − ȳ)²
    = 0² + 1² + 3² + (−3)² + 0² + (−1)² + (−1)² + 1² = 22.0

Note that 222.0 = 200.0 + 22.0. This demonstrates a basic property of ANOVA: the total sum of squares equals the sum of the squares due to the known components,

SST = SSm + SSe

The formulas for the sums of squares can be written generally as

SST = Σ yi²
SSm = T²/N
SSe = Σ (yi − T̄)²

In the above example the error value was calculated directly, but that is not necessary, since SSe = SST − SSm.
3.2.1.1 Degrees of Freedom (dof)

To complete the ANOVA calculations, one other element must be considered: degrees of freedom. The concept of dof is to allow one dof for each independent comparison that can be made in the data. Only one independent comparison can be made for the mean of all the data (there is only one mean), so only one dof is associated with the mean. The concept also applies to the dof associated with the error estimate: with 8 observations, there are 7 independent comparisons that can be made to estimate the variation in the data (data point 1 can be compared to 2, 2 to 3, 3 to 4, and so on, giving 7 independent comparisons).

One of the questions an instructor dreads most from an audience is, "What exactly is degrees of freedom?" It is not that there is no answer; the mathematical answer is a single phrase, "the rank of a quadratic form." It is one thing to say that degrees of freedom is an index and to describe how to calculate it for certain situations, but none of these pieces of information tells what degrees of freedom means. At the moment, I am inclined to define degrees of freedom as a way of keeping score. A data set contains a number of observations, say n. They constitute n individual pieces of information. These pieces of information can be used either to estimate parameters (such as the mean) or to estimate variability. In general, each item being estimated costs one degree of freedom, and the remaining degrees of freedom are used to estimate variability. All we have to do is count properly.

A single sample: there are n observations and one parameter (the mean) to be estimated. That leaves (n − 1) degrees of freedom for estimating variability.
Two samples: there are n1 + n2 observations and two means to be estimated. That leaves (n1 + n2 − 2) degrees of freedom for estimating variability.

Let v = dof, vt = total dof, vm = dof associated with the mean (always 1 for each sample), and ve = dof associated with error. Then

vt = vm + ve
8 = 1 + 7

The total dof equals the total number of observations in the data set for this method of ANOVA.

Summary of no-way ANOVA:

Source   SS    dof
Mean     200   1
Error    22    7
Total    222   8

One other statistic calculated from ANOVA is the variance V. The error variance, or just variance, is

Ve = SSe/ve = 22/7 = 3.14

Also, the sample standard deviation is s = √V, where

s = √[ Σ (yi − ȳ)² / (N − 1) ]

so that

s² = V = Σ (yi − ȳ)² / (N − 1)

which is essentially Ve = SSe/ve.
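The entire no-way ANOVA for the pump data can be reproduced in a few lines; this sketch recomputes the table above.

flow = [5, 6, 8, 2, 5, 4, 4, 6]   # pump flow rates, oz/min

N = len(flow)
T = sum(flow)                      # grand total = 40
SS_T = sum(y ** 2 for y in flow)   # total sum of squares = 222.0
SS_m = T ** 2 / N                  # sum of squares due to mean = 200.0
SS_e = SS_T - SS_m                 # error sum of squares = 22.0

v_e = N - 1                        # 7 dof for error
V_e = SS_e / v_e                   # error variance ~3.14
print(SS_T, SS_m, SS_e, V_e)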
Although the direct formula above is faster than ANOVA for calculating the error variance in this simple case, ANOVA becomes the faster method as experimental situations grow more complex. Error variance is a measure of the variation due to all the uncontrolled parameters, including measurement error, involved in a particular experiment (set of data collected); it is essentially the natural variation of a process.

3.2.2 One-Way ANOVA

This is the next most complex ANOVA to conduct. This situation considers the effect of one controlled parameter upon the performance of a product or process, in contrast to no-way ANOVA, where no parameters were controlled.

Again, let us work through an imaginary yet potentially real situation. The same engineer who sampled the flow rate of the windshield pumps is charged with establishing the fluid velocity generated by the windshield washer pumps. When the fluid velocity is too low, the fluid merely dribbles out; when it is too high, the air movement past the windshield cannot distribute the cleaning fluid adequately to satisfy the driver. The engineer proposes a test of three different orifice areas to determine which gives a proper fluid velocity. Before the test data are collected, some notation to simplify the mathematical discussion:

A = factor under investigation (outlet orifice area)
A1 = 1st level of orifice area = 0.0015 sq in
A2 = 2nd level of orifice area = 0.0030 sq in
A3 = 3rd level of orifice area = 0.0045 sq in

The same symbols for the level designations are used to denote the sums of responses:

Ai = sum of observations under level Ai
Āi = average of observations under level Ai = Ai/nAi
T = sum of all observations
T̄ = average of all observations = T/N
nAi = number of observations under level Ai
kA = number of levels of factor A
With this notation in mind, the engineer constructs four pumps with each given orifice area (making 12 pumps to test over the three levels). The test data are as follows:

Level   Area (sq in)   Velocity (ft/s)       Total
A1      0.0015         2.2  1.9  2.7  2.0    8.8
A2      0.0030         1.5  1.9  1.7  -*     5.1
A3      0.0045         0.6  0.7  1.1  0.8    3.2
                                Grand total  17.1

* Pump dropped and destroyed; no data.

A1 = 8.8 ft/s, nA1 = 4, Ā1 = 2.2 ft/s
A2 = 5.1 ft/s, nA2 = 3, Ā2 = 1.7 ft/s
A3 = 3.2 ft/s, nA3 = 4, Ā3 = 0.8 ft/s
kA = 3, T = 17.1 ft/s, N = 11, T̄ = 1.6 ft/s

Sums of squares (one-way ANOVA): two methods can be used to complete the calculations, including the mean or excluding the mean.

Method 1 (including the mean). As before, the total variation can be decomposed into its appropriate components:

- the variation of the mean of all observations relative to zero (variation due to the mean);
- the variation of the mean of observations under each factor level around the mean of all observations (variation due to factor A);
- the variation of individual observations around the average of observations under each level (variation due to error).

The calculations are identical to the no-way ANOVA example, except for the component of variation due to factor A, the outlet orifice area:

SST = Σ yi² = 2.2² + 1.9² + … + 0.8² = 31.19

SSm = N(T̄)² = T²/N = 17.1²/11 = 26.583

[Graphical representation omitted.]
The magnitude of the segment for each level of factor A is squared and summed. For instance, the length of the line segment due to level A1 is (Ā1 − T̄), and there are four observations under the A1 condition. The same type of information is collected for the other levels of factor A:

SSA = nA1(Ā1 − T̄)² + nA2(Ā2 − T̄)² + nA3(Ā3 − T̄)²
    = 4(0.64545)² + 3(0.14545)² + 4(−0.75454)² = 4.007

The above calculation is tedious and is mathematically equivalent to

SSA = Σ (Ai²/nAi) − T²/N = 8.8²/4 + 5.1²/3 + 3.2²/4 − 17.1²/11 = 4.007

The variation due to error is given by

SSe = Σj Σi (yi − Āj)² = 0² + (−0.3)² + 0.5² + (−0.2)² + … + 0.3² + 0² = 0.600

Error variation is again based on the method of least squares, but in one-way ANOVA the least squares are evaluated around the average of each level of the controlled factor. Error variation is the uncontrolled variation within the controlled group. Again, the total variation is

SST = SSm + SSA + SSe
31.190 = 26.583 + 4.007 + 0.600

Dof (including the mean):

vt = vm + vA + ve
vt = 11, vA = kA − 1 = 2
ve = 11 − 1 − 2 = 8

One-way ANOVA summary (Method 1):

Source                  SS       dof v   Variance V
Mean m                  26.583   1       26.583
Factor A                4.007    2       2.004
Uncontrolled error e    0.600    8       0.075
Total                   31.190   11

In the table above we have been able to estimate the variance for both factor A and the uncontrolled error, which is what will interest us when we design experiments. Also, looking at the Method 1 calculations, the mean does not affect the calculation of the variation due to factor A or due to error in any way. Thus, in most experimental situations the mean has no practical value, with the exception of 'lower is better' situations, where the variation due to the mean is a measure of how far the average is from zero and how successful a factor might be in reducing the average to zero.

Method 2. When we exclude the mean from the ANOVA calculations, the total variation is calculated as:

- the variation of the average of observations under each factor level around the average of all observations;
- the variation of the individual observations around the average of observations under each factor level.

[Graphical representation omitted.] The same concept of summing the squares of the magnitudes of the various line segments applies in Method 2 as well:

SST = Σ (yi − T̄)² = 4.607

Mathematically this is equivalent to

SST = Σ yi² − T²/N
This expression will be used to define the total variation by this method; note that it equals (SST − SSm) from the previous calculations. The variation due to factor A and the uncontrolled error is calculated exactly as in Method 1:

SSA = Σ (Ai²/nAi) − T²/N

SSe = Σj Σi (yi − Āj)²

Dof (excluding the mean): in Method 1 the dof relation was vt = vm + vA + ve, where vm = 1 (always) and vt = N. In Method 2 the dof for the mean is subtracted from both sides of this equation, so

vt = N − 1 = 11 − 1 = 10
vt = vA + ve
10 = (kA − 1) + ve, so ve = 8

One-way ANOVA summary (Method 2):

Source                  SS      dof v   Variance V
Factor A                4.007   2       2.004
Uncontrolled error e    0.600   8       0.075
Total                   4.607   10

The variances for factor A and error are identical in both methods. Method 2 disregards the value of the mean and is the most popular method. Only when the performance parameter is a 'lower is better' characteristic would the variance due to the mean be relevant; it provides a measure of how effective some factor might be in reducing the average to zero.
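A sketch of the Method 2 calculation for the orifice data follows; note that the unequal group sizes caused by the destroyed pump are handled naturally by the general formula.

groups = {
    "A1": [2.2, 1.9, 2.7, 2.0],
    "A2": [1.5, 1.9, 1.7],        # one pump destroyed, only 3 data points
    "A3": [0.6, 0.7, 1.1, 0.8],
}

data = [y for g in groups.values() for y in g]
N, T = len(data), sum(data)

SS_T = sum(y ** 2 for y in data) - T ** 2 / N                           # 4.607
SS_A = sum(sum(g) ** 2 / len(g) for g in groups.values()) - T ** 2 / N  # 4.007
SS_e = SS_T - SS_A                                                      # 0.600

v_A = len(groups) - 1              # 2 dof for factor A
v_e = (N - 1) - v_A                # 8 dof for error
print(SS_A / v_A, SS_e / v_e)      # variances ~2.004 and ~0.075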
Let us sum up the above discussion by defining three sums of squares.

Total corrected sum of squares (SST): the squared deviations of the observations from the overall average.
Error sum of squares (SSE): the squared deviations of the observations from the treatment averages.
Treatment sum of squares (SStrt): the squared deviations of the treatment averages from the overall average (times n).

Dot notation:

y.. = Σi Σj yij,  ȳ.. = y../N (the overall average)
yi. = Σj yij,  ȳi. = yi./n (the average within treatment i)

Raw SS = Σi Σj yij²

Total corrected SS:

SST = Σi Σj (yij − ȳ..)²

This measures the overall variability in the data; SST/(N − 1) is just the sample variance of the whole data set.

Decomposition of SS: we now derive the following equation.
SST = SStrt + SSE

SST = Σi Σj (yij − ȳ..)²
    = Σi Σj [(yij − ȳi.) + (ȳi. − ȳ..)]²
    = Σi Σj (yij − ȳi.)² + Σi n(ȳi. − ȳ..)² + 2 Σi Σj (yij − ȳi.)(ȳi. − ȳ..)
    = SSE + SStrt + 2 Σi (ȳi. − ȳ..) Σj (yij − ȳi.)

It remains to show that the last term is zero. For each treatment i,

Σj (yij − ȳi.) = yi. − n ȳi. = 0

so the cross term vanishes and SST = SSE + SStrt.

3.2.3 Two-Way ANOVA

Two-way ANOVA is the next highest order of ANOVA to review; there are two controlled parameters in this experimental situation. Consider the following case. A student worked at an aluminum casting foundry which manufactured pistons for reciprocating engines. The problem with the process was how to attain the proper hardness (Rockwell B) of the casting for a particular product. The engineers were interested in the effect of copper and magnesium content on casting hardness.
According to the specs, the copper content could be 3.5 to 4.5% and the magnesium content 1.2 to 1.8%. The student ran an experiment to evaluate these factors and conditions simultaneously.

If A = % copper content: A1 = 3.5, A2 = 4.5
If B = % magnesium content: B1 = 1.2, B2 = 1.8

For two two-level factors, the number of experimental conditions is 2² = 4, namely A1B1, A1B2, A2B1, and A2B2. Imagine that four different mixes of metal constituents are prepared, castings poured, and hardness measured, with two samples measured from each mix. The results look like:

        A1       A2
B1      76, 78   73, 74
B2      77, 78   79, 80

To simplify the discussion, 70 is subtracted from each of the observations from the four mixes. The transformed results:

        A1      A2
B1      6, 8    3, 4
B2      7, 8    9, 10

Two-way ANOVA calculations. The variation may now be decomposed into more components:

1. variation due to factor A;
2. variation due to factor B;
3. variation due to the interaction of factors A and B;
4. variation due to error.

An equation for the total variation can be written as

SST = SSA + SSB + SSA×B + SSe

where A × B represents the interaction of factors A and B: the mutual effect of Cu and Mg in affecting hardness. Some preliminary calculations will speed up the ANOVA:
        A1      A2      Total
B1      6, 8    3, 4    21
B2      7, 8    9, 10   34
Total   29      26      55 (grand total)

A1 = 29, A2 = 26, B1 = 21, B2 = 34, T = 55, N = 8
nA1 = 4, nA2 = 4, nB1 = 4, nB2 = 4

Total variation:

SST = Σ yi² − T²/N = 6² + 8² + 3² + … + 10² − 55²/8 = 40.875

Variation due to factor A:

SSA = Σ (Ai²/nAi) − T²/N = 29²/4 + 26²/4 − 55²/8 = 1.125

A useful arithmetic check: the level totals in the numerators must satisfy 29 + 26 = 55, and the sample sizes in the denominators 4 + 4 = 8. If these conditions are not met, the SSA calculation will be wrong. For a two-level experiment with equal sample sizes, the equation above simplifies to the special formula:
SSA = (A1 − A2)²/N = (29 − 26)²/8 = 1.125

Similarly, the variation due to factor B is

SSB = (B1 − B2)²/N = (21 − 34)²/8 = 21.125

To calculate the variation due to the interaction of factors A and B, let (A×B)i represent the sum of the data under the ith condition of the combination of factors A and B, let c represent the number of possible combinations of the interacting factors, and let n(A×B)i be the number of data points under that condition:

SSA×B = Σi [(A×B)i² / n(A×B)i] − T²/N − SSA − SSB

Note that when the various combinations are summed, squared, and divided by the number of data points for that combination, the resulting value also includes the factor main effects, which must be subtracted. When using this formula, all lower-order interactions and factor effects are to be subtracted. For the example problem:

A1B1 = 14, A1B2 = 15, A2B1 = 7, A2B2 = 19

The number of possible combinations is c = 4, and since there are two observations under each combination, n(A×B)i = 2:

SSA×B = 14²/2 + 7²/2 + 15²/2 + 19²/2 − 55²/8 − SSA − SSB = 15.125

Since SST = SSA + SSB + SSA×B + SSe,

SSe = 40.875 − 1.125 − 21.125 − 15.125 = 3.500
Degrees of freedom (two-way ANOVA):

vt = vA + vB + vA×B + ve
vt = N − 1 = 8 − 1 = 7
vA = kA − 1 = 1
vB = kB − 1 = 1
vA×B = (vA)(vB) = 1
ve = 7 − 1 − 1 − 1 = 4

ANOVA summary table (two-way):

Source   SS       dof v   Variance V   F
A        1.125    1       1.125        1.29
B        21.125   1       21.125       24.14*
A×B      15.125   1       15.125       17.29**
e        3.500    4       0.875
Total    40.875   7

* significant at 95% confidence   ** significant at 90% confidence

The ANOVA results indicate that Cu by itself has no effect on the resultant hardness, that magnesium has a large effect (the largest SS), and that the interaction of Cu and Mg plays a substantial part in determining hardness. A plot of these data is shown below. In this plot the lines are nonparallel, which indicates the presence of an interaction: the factor A effect depends on the level of factor B, and vice versa. If the lines were parallel, there would be no interaction, meaning the factor A effect would be the same regardless of the level of factor B.
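The sums of squares and F ratios in the summary table can be reproduced as follows; the data are the transformed hardness values, and the helper function level_total() exists only for this sketch.

cells = {
    ("A1", "B1"): [6, 8],  ("A2", "B1"): [3, 4],
    ("A1", "B2"): [7, 8],  ("A2", "B2"): [9, 10],
}

data = [y for obs in cells.values() for y in obs]
N, T = len(data), sum(data)
CF = T ** 2 / N                    # correction factor, 55^2 / 8

def level_total(axis, level):
    # Sum of all observations at one level of factor A (axis 0) or B (axis 1)
    return sum(sum(obs) for key, obs in cells.items() if key[axis] == level)

SS_T = sum(y ** 2 for y in data) - CF                                  # 40.875
SS_A = (level_total(0, "A1") - level_total(0, "A2")) ** 2 / N          # 1.125
SS_B = (level_total(1, "B1") - level_total(1, "B2")) ** 2 / N          # 21.125
SS_AB = (sum(sum(o) ** 2 / len(o) for o in cells.values())
         - CF - SS_A - SS_B)                                           # 15.125
SS_e = SS_T - SS_A - SS_B - SS_AB                                      # 3.500

V_e = SS_e / 4                     # error variance, 4 dof
print(SS_A / V_e, SS_B / V_e, SS_AB / V_e)   # F values 1.29, 24.14, 17.29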
[Figure: interaction plot of mean hardness versus factor A level, one line for B1 and one for B2.]

Geometrically, some useful information is available from the graph: the relative magnitudes of the various effects can be seen. The B effect is the largest, the A × B effect the next largest, and the A effect very small. Compare:

Ā1 = 29/4 = 7.25,  Ā2 = 26/4 = 6.5
B̄1 = 21/4 = 5.25,  B̄2 = 34/4 = 8.5
[Figure: the A, B, and A × B effects marked on the interaction plot using the midpoints of lines B1 and B2.]

By plotting the data for each factor, three situations can be observed: in the first case there is no interaction, because the lines are parallel; in the second case there is some interaction; and in the third case there is a strong interaction.

3.3 Example of ANOVA

During the late 1980s, Modi Xerox had a large base of customers (50 thousand) for this model spread over the entire country. Many buyers of these machines earn their livelihood by running copying services. Each of these buyers ultimately serves a very large number of customers (end users). When copy quality is either poor or inconsistent, these customers earn a bad name and their image and business suffer. In the late 1980s, the company integrated the total quality management philosophy into its operations and placed the highest focus on customer satisfaction: any problem of field failure was given the highest priority for investigation. The problem of skips was subjected to
detailed investigation by a cross-functional team from the Production, Marketing, and Quality Assurance Departments.

Problem Description

The pattern of blurred images (skips) observed in the copy is shown in the figure above. It usually occurs between 10 and 60 mm from the lead edge of the paper. Sometimes, on a photocopy taken on company letterhead paper, the company logo gets blurred, which is not appreciated by the customer. This problem was noticed in only one-third of the machines produced by the company; the remaining two-thirds of machines in the field worked well without this problem. The in-house test evaluation record also confirmed the problem in only about 15% of the machines produced. Data analysis indicated that not all the machines produced were faulty. Therefore, the focus of further investigation was to find out what went wrong in the faulty machines, or whether there were basic differences between the components used in good and faulty machines.

Preliminary Investigation

A copier machine consists of more than 1000 components and assemblies. A brainstorming session by the team helped identify 16 components suspected of being responsible for the problem of blurred images. Each suspected component had at least two possible dimensional characteristics which could have resulted in the skip symptom. This led to more than 40 probable causes (40 dimensions arising out of 16 components) for the problem. An attempt was made by the team to identify the real causes among these 40 probable causes. Ten bad machines were stripped open and the various dimensions of these 16 components were measured. It was observed that all the dimensions were well within specification. Hence, this investigation did not give any clues
to the problem. Moreover, the time and effort spent in dismantling the faulty machines and checking various dimensions of 16 components was in vain. This gave rise to the thought that conforming to specification does not always lead to perfect quality. The team needed to think beyond the specification in order to find a solution to the problem.

Taguchi Experiment

An earlier brainstorming session had identified 16 components that were likely to be the cause of this problem. A study of the travel documents of 300 problem machines revealed that on 88% of occasions the problem was solved by replacing one or more of only four parts of the machine. These four parts were from the list of 16 parts identified earlier; they were considered critical, and it was decided to conduct an experiment on them. The parts were the following:

(a) Drum shaft
(b) Drum gear
(c) Drum flange
(d) Feed shaft

Two sets of these parts were taken for Experiment I, one from an identified problem machine and one from a problem-free machine. The two levels considered in the experiment were 'good' and 'bad', with 'good' signifying parts from the problem-free machine and 'bad' signifying parts from the problem machine. The factors and levels thus identified are given in the table below. A full factorial experiment would have required 16 trials, while the experiment was designed as an L8(2⁷) fractional factorial using a linear graph and the orthogonal array (OA) table developed by Taguchi.
39
The linear graph and the layout of the L8 array are presented in the figure and table below. A master plan for conducting the eight experiments was prepared. The response considered was the number of defective copies (copies exhibiting the skip problem) in a 50-copy run. The master plan, along with the response, is presented in the table below.
40
Analysis and Results
The response considered was the fraction defective (p = d/n). The data were normalized by the arcsine transformation sin⁻¹(√p). Analysis of variance (ANOVA) was performed on the normalized data and the results are presented in the table. The critical values are F(1,5) at 0.05 = 6.61 and F(1,5) at 0.01 = 16.26. The percent contribution of factor A is
pA = (3528 − 32.4) × 100 / 3788 = 92.3%
As can be seen from the table, factor A is highly significant (the only significant factor), explaining 92.3% of the total variation. In other words, of the four components studied, the drum shaft alone is the source of trouble for skips. The problem was now narrowed down to one component from the earlier list of 16, giving a ray of hope for moving towards a solution. Further investigations were carried out on the drum shaft design.
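A minimal sketch of these two computations, assuming the sums of squares quoted above (factor sum of squares 3528, total 3788, with 32.4 taken as the error-variance correction); the raw defect counts are not reproduced in the text, so the transform example uses made-up numbers:

```python
import math

def arcsine_transform(defectives, sample_size):
    """Normalize a fraction defective p = d/n as sin^-1(sqrt(p)), in degrees."""
    p = defectives / sample_size
    return math.degrees(math.asin(math.sqrt(p)))

# Illustrative only: 12 skip-affected copies in a 50-copy run.
print(round(arcsine_transform(12, 50), 2))

def percent_contribution(s_factor, v_error, s_total):
    """Percent contribution: the factor's sum of squares, corrected by the
    error variance (one degree of freedom here), as a share of the total."""
    return (s_factor - v_error) * 100 / s_total

# Values quoted in the text for factor A (drum shaft).
print(round(percent_contribution(3528, 32.4, 3788), 1))  # 92.3
```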
41
Drum Shaft Design
The configuration of the drum shaft is defined by 15 dimensions. A brainstorming session by the team members identified wobbling and increased play in the drum shaft as the major causes of this problem. Four dimensions of the drum shaft were suspected of causing the wobbling and excessive play. These dimensions were checked in all 20 machines (10 good and 10 bad) and found to be well within specification. The question then arose as to where the problem lay: definitely not within the specification, so perhaps outside it? This led the team to think beyond the specification in order to find a solution. As a first step, the dimension patterns of good and bad machines were compared. The dimension patterns for the four critical dimensions suspected to be the cause of the problem are shown in the figure below. There is not much difference in pattern between good and bad machines with respect to dimensions B, C, and D. Dimension A, that is, the diameter over pin (DOP) dimension of the drum shaft splines, revealed a difference in pattern between the good and bad machines. The DOP of the shafts from the 10 problem machines was found to lie in the lower half of the specification range, whereas in the case of the problem-free machines the DOP was always in the
42
upper half of the specification range (shown in the figure above). The DOP dimension of the drum shaft is shown in the figure below. DOP (diameter over pin) is a measure of the tooth thickness, t. A higher DOP means a greater tooth thickness of the splines, and vice versa. If the DOP of the drum shaft splines is on the lower side, it will increase the clearance, resulting in more play between the drum shaft and the drum gear assembly.
43
Here, the image of the original document is transferred onto the photoreceptor drum through a series of lenses and mirrors. The photoreceptor drum is coated with a photo-conductive material and is given a positive electrical charge. During the transfer of the image from the document, the whole drum area is exposed to light except the area where the image is formed. Due to the exposure to light, the photo-conductive material becomes a conductor and the charge is neutralized, except in the image area. This image is called the 'latent image'. Subsequently, it is transferred to paper through toner and developer. During the transfer of the image, the drum should rotate at a uniform speed: any jerk to the photoreceptor drum during rotation will cause distortion or blur in the latent image. The photoreceptor drum is driven by the drum shaft and drum gear assembly. Excessive play between the drum shaft and the drum gear gets magnified and produces jerks in the photoreceptor unit. The bad-machine dimension pattern clearly indicates the possibility of excessive play between the drum shaft and the drum gear assembly. A sketch of the photoreceptor assembly is shown below.
44
A lower DOP produces a larger gap between the drum shaft and the drum gear, which causes excessive play in the drum shaft. Technically, excessive play between the drum and the drive gear can cause the skip problem. This theory was further confirmed when this model (X) was compared with models Y and Z, where no skip problem was observed. In models Y and Z, the drum shaft and drive gear are integrated into a single unit, which probably explains the zero play and the absence of skip defects. The drum and drive gear assemblies of the three models X, Y, and Z are shown in the figure for comparison. For further validation of this point, the play between the drum shaft and drive gear was eliminated by temporarily integrating the system with a drop of Araldite (glue) in 50 problem machines. A test run was taken on all 50 machines and no skip defect was observed. This led to the conclusion that the drum shaft DOP specifications are not fail-safe against skips. It was now felt necessary to arrive at new specifications for the DOP to ensure no excessive play between the drum
45
shaft and drive gear. The question arose as to how much play could be permitted. To find an answer, a similar drive system on the very successful two-wheeler, the Lambretta scooter, was studied, and it was found that the play varied between 0.04 and 0.07 mm. To be on the safe side, it was decided to allow a maximum play of only 0.04 mm between the drum shaft and drive gear. These drum shafts are manufactured by subcontractors, so the new specifications were arrived at by taking into consideration the suppliers' capability of machining these dimensions and the maximum permissible play of 0.04 mm. The old and new specifications for the DOP are shown in the figure.
Confirmatory Trial
The implications of the new specifications for the other systems of the machine were examined, and it was found that the change in specification would not create any problems. The 36 worst-affected machines were selected from the field; drum shafts with the new specifications were made and fitted on these machines. Test results from these machines showed a total elimination of skip defects. Ultimately, to give customers the benefit of the study, 5000 drum shafts with the new specifications were made and incorporated in 5000 existing machines of the old design in the field. A sample performance audit of 800 of those 5000 machines was carried out, and none of the 800 showed the skip problem. This provided confidence that the new design had worked successfully. After that, the new design was implemented fully by releasing the new specification. The rate of occurrence of the skip problem in the assembly line dropped from the previous 13% to less than 0.5%.
Beating the Benchmark
46
Machine specifications released by Rank Xerox (UK) permit the occurrence of skip up to 10 mm from the lead edge. Earlier specifications of Modi Xerox permitted the occurrence of skip up to 60 mm from the lead edge but, for most customers, loss of information near the lead edge is not acceptable, as company logos are located near the lead edge of letterheads. The exercise was initially taken up to reach the standard of Rank Xerox (skip up to 10 mm). The modified design of the drum shaft, evolved through scientific and systematic investigation, has completely eliminated the skip and hence has surpassed even the Rank Xerox benchmark of permitting skip up to 10 mm from the lead edge. This is a great accomplishment towards skip-free copying: a problem has been completely solved for which no solution was previously available worldwide.
CHAPTER 4
4.1 WHAT IS AN ARRAY
An array's name indicates the number of rows and columns it has, and also the number of levels in each column. Thus the array L4(2³) has four rows and three 2-level columns.
4.2 HISTORY OF ORTHOGONAL ARRAYS
Historically, related methods were developed for agriculture, largely in the UK, around the Second World War; Sir R. A. Fisher was particularly associated with this work. In the Fisher field experiment sketched below, the field area has been divided up into rows and columns, with four fertilizers (F1–F4) and four irrigation levels (I1–I4) represented. Since all combinations are taken, sixteen 'cells' or 'plots' result.
47
The Fisher field experiment is a full factorial experiment, since all 4 × 4 = 16 combinations of the experimental factors, fertilizer and irrigation level, are included. The number of combinations required may not be feasible or economic. To cut down on the number of experimental combinations, a Latin square design of experiment may be used. Here there are three fertilizers (F1–F3), three irrigation levels (I1–I3), and three alternative additives (A1–A3), but only nine of the 3 × 3 × 3 = 27 combinations of the full factorial are included:

        F1   F2   F3
  I1    A1   A2   A3
  I2    A2   A3   A1
  I3    A3   A1   A2

These are 'pivotal' combinations, however, that still allow the identification of the best fertilizer, irrigation level, and additive, provided that there are no serious non-additivities or interactions in the relationship between yield and these control factors. The property of Latin squares that corresponds to this separability is that each of the labels A1, A2, A3 appears exactly once in each row and column (a small sketch of this check appears below). A difference from agricultural applications is that in agriculture the 'noise', or uncontrollable factors that disturb production, such as the weather, also tend to disturb experimentation. In industry, factors that disturb production, or are uneconomic to control in production, can and should be directly manipulated in test. Our desire is to identify a design or line calibration which can best survive the transient effects in the manufacturing process caused by the uncontrolled factors; we wish to have small piece-to-piece and time-to-time variation associated with this noise. To do this we can force diversity onto the noise conditions by crossing our orthogonal array of controllable factors with a full factorial or orthogonal array of noise factors. Thus, in the example, we evaluate our product for each of the nine trials against the background of four different combinations of noise conditions. We are looking for one of the nine rows of control-factor combinations, or for one of the 'missing' 72 rows (3 × 3 × 3 × 3 = 81; 81 − 9 = 72), which not only gives the correct mean result on average but also minimises variation away from the mean. To do this Taguchi introduces the signal-to-noise ratio.
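A minimal sketch of the balance check mentioned above, using the 3 × 3 Latin square from the table (the check itself is illustrative):

```python
# Latin square from the text: rows = irrigation levels I1..I3,
# columns = fertilizers F1..F3, entries = additives.
square = [
    ["A1", "A2", "A3"],  # I1
    ["A2", "A3", "A1"],  # I2
    ["A3", "A1", "A2"],  # I3
]
labels = {"A1", "A2", "A3"}

# Each additive appears exactly once in every row...
for row in square:
    assert set(row) == labels

# ...and exactly once in every column.
for col in zip(*square):
    assert set(col) == labels

print("Balanced: each additive meets every irrigation level and fertilizer once.")
```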
48
4.3 Introduction to Orthogonal Arrays
Engineers and scientists are often faced with two product or process improvement situations. One development situation is to find a parameter that will improve some performance characteristic to an acceptable and optimum value; this is the most typical situation in most organizations. A second situation is to find a less expensive, alternative design, material, or method which will provide equivalent performance. When searching for improved or equivalent designs, the experimenter typically runs some tests, observes some performance of the product, and makes a decision to use or reject the new design. In order to improve the quality of this decision, proper test strategies are utilized. Before describing OAs, let us look at some other test strategies.
The most common test plan is to evaluate the effect of one parameter on product performance; this is typically called a one-factor experiment. Such an experiment evaluates the effect of one parameter while holding everything else constant. The simplest case is to run a test at two different conditions of that parameter, for example, the effect of cutting speed on the finish of a machined part. Two different cutting speeds could be used and the resultant finish measured to determine which cutting speed gave the better results. If the first level (the first cutting speed) is symbolized by 1 and the second level by 2, the experimental conditions look like this:

  Trial No.   Factor Level   Test Results
      1             1            *,*
      2             2            *,*

The * symbolizes the value of finish that would be obtained. This sample of two (in this case) could be averaged and compared to the second test. If there happens to be an interaction of this factor with some other factor, then this interaction cannot be studied.
Several Factors One at a Time
If this does not work, the next step is to evaluate the effect of several parameters on product performance, one at a time. Let us assume the experimenter has looked at four different factors A, B, C, and D, each evaluated one at a time. The resultant test program may appear like the table below:

              Factor and Factor Level
  Trial No.    A    B    C    D    Test Results
      1        1    1    1    1        *,*
      2        2    1    1    1        *,*
      3        1    2    1    1        *,*
      4        1    1    2    1        *,*
      5        1    1    1    2        *,*
49
One can see that the first trial is the baseline condition; the results of trial 2 can be compared with trial 1 to estimate the effect of factor A on product performance. Similarly, the results of trial 3 can be compared with trial 1 to estimate the effect of factor B, and so on. The main limitation of several factors one at a time is that no interaction among the factors studied can be observed. Also, the strategy makes limited use of the data when evaluating factor effects: of the ten data points in the above example, only two were used to compare against two others, and the remaining six were temporarily ignored. If we try to use all the data points, then the experiment does not remain orthogonal. One main requirement of orthogonality is a balanced experiment, which means an equal number of samples under the various test conditions (an equal number of tests under A1 and A2). For instance, in the above experiment, if all the data under A1 and A2 are averaged and compared, this is not a fair comparison of A1 with A2: of the four trials under level A1, three were at level B1 and one at level B2, while the one trial under A2 was at level B1. Therefore, if factor B has an effect on the performance, it will be part of the observed effect of factor A, and vice versa. Only when trial 1 is compared with the other trials one at a time are the factor effects orthogonal.
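This imbalance can be seen by tallying the levels directly. A minimal sketch over the five-trial plan above (the tally logic is illustrative):

```python
from collections import Counter

# One-factor-at-a-time plan from the table: levels of A, B, C, D per trial.
trials = [
    (1, 1, 1, 1),
    (2, 1, 1, 1),
    (1, 2, 1, 1),
    (1, 1, 2, 1),
    (1, 1, 1, 2),
]

# Tally which level of B accompanies each level of A.
tally = Counter((a, b) for a, b, _, _ in trials)
print(tally)  # {(1, 1): 3, (1, 2): 1, (2, 1): 1}
# A1 occurs with B1 three times and with B2 once, while A2 occurs only
# with B1, so averaging all A1 runs against all A2 runs mixes the B
# effect into the apparent A effect.
```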
50
Several factors all at the same time
The most desperate and urgent situations find the experimenter evaluating the effect of several parameters on performance all at the same time, hoping that at least one of the changes will improve the situation sufficiently.

              Factor and Factor Level
  Trial No.    A    B    C    D    Test Results
      1        1    1    1    1        *,*
      2        2    2    2    2        *,*

This situation makes the separation of main factor effects impossible, let alone any interaction effects. Some factors may be having a positive effect and some a negative one, but we get no hint of this information.
4.3.1 Investigating many factors – a case study
In most problems, preliminary brainstorming will reveal a large number of factors which may influence the output of the process under study. How are the effects of these factors prioritized? The traditional approach is to:
- Isolate what is believed to be the most important factor
- Investigate this factor by itself, ignoring all others
- Make recommendations on changes to this crucial factor
- Move on to the next factor and repeat
This OFAT (one factor at a time) approach has several critical weaknesses. The factorial approach, in which several factors are studied simultaneously in a balanced manner, is much better. We will try to understand this through an example.
4.3.1.1 Example
A process producing steel springs is generating considerable scrap due to cracking after heat treatment. A study is planned to determine better operating conditions to reduce the cracking problem. There are several ways to measure cracking:
- Size of the crack
- Presence or absence of cracks
The response selected was Y: the percentage of springs without cracks in a batch of 100 springs. Three major factors were believed to affect the response:
- T: steel temperature before quenching
- C: carbon content (percent)
- O: oil quenching temperature
Levels chosen for the study are:

  Factor   Low (Level 1)   High (Level 2)
  T        1450 °F         1600 °F
51
  C        0.5%            0.7%
  O        70 °F           120 °F

Classical approach (OFAT experiment): four runs at each level of T, with C and O at their low levels.

  Steel Temp.   % springs without cracks   Average
  1450          61  67  68  66             65.5
  1600          79  75  71  77             75.5

Conclusion: increasing T reduces cracking by 10%.
Problem: how general is this conclusion? Does it depend upon:
- Quench temperature?
- Carbon content?
- Steel chemistry?
- Spring type?
- Analyst?
- Etc.?
Carrying out similar OFAT experiments for C and O would require a total of 24 observations and provide only limited knowledge.
Factorial approach: include all factors in a balanced design. To increase the generality of the conclusions, use a design that involves all eight combinations of the three factors. The treatments for the eight runs are given below:

  Run   C     T      O
   1    0.5   1450    70
   2    0.7   1450    70
   3    0.5   1600    70
   4    0.7   1600    70
   5    0.5   1450   120
   6    0.7   1450   120
   7    0.5   1600   120
   8    0.7   1600   120

The above eight runs constitute a FULL FACTORIAL DESIGN. The design is balanced for every factor: four runs have T at 1450 and four have T at 1600, and the same is true for C and O.
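A minimal sketch of enumerating this full factorial (the loop order below reproduces the run order of the table, with C varying fastest and O slowest):

```python
C_levels = [0.5, 0.7]      # carbon content (%)
T_levels = [1450, 1600]    # steel temperature (deg F)
O_levels = [70, 120]       # oil quench temperature (deg F)

# All 2 * 2 * 2 = 8 treatment combinations.
runs = [(c, t, o) for o in O_levels for t in T_levels for c in C_levels]
for i, (c, t, o) in enumerate(runs, start=1):
    print(f"Run {i}: C={c}  T={t}  O={o}")

# Balance check: each level of each factor appears in exactly four runs.
assert sum(1 for c, _, _ in runs if c == 0.5) == 4
assert sum(1 for _, t, _ in runs if t == 1450) == 4
assert sum(1 for _, _, o in runs if o == 70) == 4
```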
52
IMMEDIATE ADVANTAGES
- The effect of each factor can be assessed by comparing the responses from the appropriate sets of four runs.
- More general conclusions.
- 8 runs rather than 24.
The data for the complete factorial experiment are:

  Run   C     T      O    Y
   1    0.5   1450    70   67
   2    0.7   1450    70   61
   3    0.5   1600    70   79
   4    0.7   1600    70   75
   5    0.5   1450   120   59
   6    0.7   1450   120   52
   7    0.5   1600   120   90
   8    0.7   1600   120   87

The main effect of each factor can be estimated as the difference between the average of the responses at the high level and the average of the responses at the low level. For example, to calculate the O main effect:

  Avg. of responses with O at 70 = (67 + 61 + 79 + 75) / 4 = 70.5
  Avg. of responses with O at 120 = (59 + 52 + 90 + 87) / 4 = 72.0

So the main effect of O is 72.0 − 70.5 = 1.5.
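A minimal sketch of this main-effect calculation over the eight runs (the data rows are taken from the table; the helper function is illustrative):

```python
# (C, T, O, Y) rows from the full factorial table.
data = [
    (0.5, 1450,  70, 67), (0.7, 1450,  70, 61),
    (0.5, 1600,  70, 79), (0.7, 1600,  70, 75),
    (0.5, 1450, 120, 59), (0.7, 1450, 120, 52),
    (0.5, 1600, 120, 90), (0.7, 1600, 120, 87),
]
COLS = {"C": 0, "T": 1, "O": 2}

def main_effect(factor, low, high):
    """Average response at the high level minus average at the low level."""
    col = COLS[factor]
    lo = [row[3] for row in data if row[col] == low]
    hi = [row[3] for row in data if row[col] == high]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print(main_effect("O", 70, 120))     # 1.5
print(main_effect("T", 1450, 1600))  # 23.0
print(main_effect("C", 0.5, 0.7))    # -5.0
```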
53
[Main-effect plot: average Y (scale 70–74) versus O at 70 and 120.]
The apparent conclusion is that changing the oil temperature from 70 to 120 has little effect. The factorial approach, however, allows the examination of two-factor interactions. For example, we can estimate the effect of factor O at each level of T.

At T = 1450:
  Avg. of responses with O at 70 = (67 + 61) / 2 = 64.0
  Avg. of responses with O at 120 = (59 + 52) / 2 = 55.5
  So the effect of O is 55.5 − 64.0 = −8.5.

At T = 1600:
  Avg. of responses with O at 70 = (79 + 75) / 2 = 77.0
  Avg. of responses with O at 120 = (90 + 87) / 2 = 88.5
  So the effect of O is 88.5 − 77.0 = 11.5.

The conclusion is that at T = 1450, increasing O decreases the average response by 8.5, whereas at T = 1600, increasing O increases the average response by 11.5.
54
That is, O has a strong effect, but the nature of the effect depends on the value of T. This is called an interaction between T and O in their effect on the response. It is convenient to summarize the four averages corresponding to the four combinations of T and O in a table:

                   O
           70     120    Average
  T 1450   64.0   55.5   59.75
    1600   77.0   88.5   82.75
  Average  70.5   72.0   71.25

[Interaction plot: response Y (scale 50–90) versus T at 1450 and 1600, with separate lines for O = 70 and O = 120.]

When an interaction is present, the lines on the plot will not be parallel, and the effects of the two factors must be considered simultaneously. The lines are added to the plot only to help with the interpretation; we cannot know that the response will increase linearly.
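A minimal sketch of building such a two-way table of cell averages (reusing the (C, T, O, Y) rows listed earlier; the function name is illustrative):

```python
# (C, T, O, Y) rows from the full factorial table.
data = [
    (0.5, 1450,  70, 67), (0.7, 1450,  70, 61),
    (0.5, 1600,  70, 79), (0.7, 1600,  70, 75),
    (0.5, 1450, 120, 59), (0.7, 1450, 120, 52),
    (0.5, 1600, 120, 90), (0.7, 1600, 120, 87),
]
COLS = {"C": 0, "T": 1, "O": 2}

def cell_average(settings):
    """Average Y over the runs matching the given {factor: level} settings."""
    ys = [row[3] for row in data
          if all(row[COLS[f]] == level for f, level in settings.items())]
    return sum(ys) / len(ys)

# T x O table; rows that are far from parallel signal a T x O interaction.
for t in (1450, 1600):
    print(t, [cell_average({"T": t, "O": o}) for o in (70, 120)])
# 1450 [64.0, 55.5]
# 1600 [77.0, 88.5]
```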
55
Two-way tables of averages and plots for the other factor pairs are:

                   C
           0.5    0.7    Average
  T 1450   63.0   56.5   59.75
    1600   84.5   81.0   82.75
  Average  73.75  68.75  71.25

[Interaction plot: response Y (scale 50–90) versus T at 1450 and 1600, with separate lines for C = 0.5 and C = 0.7.]

                   O
           70     120    Average
  C 0.5    73.0   74.5   73.75
    0.7    68.0   69.5   68.75
  Average  70.5   72.0   71.25

Conclusions:
- C has little effect.
- There is an interaction between T and O.
Recommendations:
- Run the process with T and O at their high levels to produce about 90% crack-free product (further investigation at other levels might produce more improvement).
- Choose the level of C so that the lowest cost is realized.
Comparison with OFAT
On the basis of the observed data, we can see that the OFAT approach leads to different conclusions if the factors are considered in the following order: fix T = 1450 and C = 0.5 and vary O; conclude O = 70 is best.

  Run   C     T      O    Y
   1    0.5   1450    70   67
   2    0.7   1450    70   61
   3    0.5   1600    70   79
   4    0.7   1600    70   75