2. ◦ Introduce the basic concepts of an attribute measurement systems analysis (MSA).
◦ Understand operational definitions for inspection and evaluation.
◦ Define attribute MSA terms.
◦ Define the procedure for conducting an attribute MSA.
◦ Demonstrate a trial run of an attribute MSA.
3. A measurement systems analysis (MSA) is an evaluation of the efficacy of a measurement system. Its purpose is to qualify a measurement system for use by quantifying its accuracy, precision, and stability. MSA is applicable to both continuous and attribute data.
5. Most problematic measurement system issues come from measuring attribute data in terms that rely on human judgment, such as good/bad, pass/fail, etc. This is because it is very difficult for all testers to apply the same operational definition of what is “good” and what is “bad.”
6. When we are not getting numeric measurement values, the tool used for this kind of analysis is called attribute gage R&R. The R&R stands for repeatability and reproducibility.
Repeatability: the variation in measurements obtained with one measurement instrument when used several times by one appraiser while measuring the identical characteristic on the same part.
Reproducibility: the variation in the average of the measurements made by different appraisers using the same measuring instrument when measuring the identical characteristic on the same part.
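For attribute data, repeatability and reproducibility are often summarized as simple agreement percentages rather than variance components. Below is a minimal Python sketch under that simplification; the data layout and example values are illustrative, and a formal study would also use kappa statistics:

# Minimal attribute R&R agreement sketch (illustrative layout, not from the slides).
# results[appraiser] = list of per-part tuples, one 0/1 call per trial.
results = {
    "A": [(1, 1, 1), (0, 0, 0), (1, 1, 0)],
    "B": [(1, 1, 1), (0, 0, 0), (1, 1, 1)],
    "C": [(1, 1, 1), (0, 1, 0), (1, 1, 1)],
}

def repeatability(trials_per_part):
    """Share of parts on which one appraiser agrees with themselves across trials."""
    agree = sum(1 for t in trials_per_part if len(set(t)) == 1)
    return agree / len(trials_per_part)

def reproducibility(results):
    """Share of parts on which all appraisers give identical calls on every trial
    (a common simplification of reproducibility for attribute data)."""
    parts = zip(*results.values())  # group the same part across appraisers
    agree = sum(1 for calls in parts if len({c for t in calls for c in t}) == 1)
    return agree / len(next(iter(results.values())))

for name, trials in results.items():
    print(f"Appraiser {name} repeatability: {repeatability(trials):.0%}")
print(f"All-appraiser agreement (reproducibility): {reproducibility(results):.0%}")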
7. Operational definitions are used to evaluate product features and make accept/reject decisions.
• Mandatory criteria for the establishment and use of operational definitions include:
A) Criteria that can be applied to an object (or a group of objects) which precisely describe what is acceptable and unacceptable.
B) A written description of the process for collecting data, including the method by which accept/reject decisions will be made.
C) Review of the accept/reject criteria with the people who will do the inspections, to ensure that the requirements are understood.
8. Select at least 20 parts to be evaluated during the study.
• At least 5 of the parts should be defective in some way. If larger sample sizes are used, include at least 25% defective parts.
• Care should be taken when selecting defective parts: if possible, select parts that are slightly beyond the specification limits or acceptance standards. Label each part with proper identification.
• Three inspectors will evaluate each part three times (three trials).
• A fourth person should record the data, noting each observation as 1 or 0: 1 is OK, 0 is not OK.
The order of inspections should be randomized after each group of inspections to minimize the risk that an inspector will remember previous accept/reject decisions; one way to do this is sketched below. The inspectors must work independently and must not discuss their accept/reject decisions with each other.
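A minimal Python sketch of such a randomization, assuming the 20 parts are simply labeled 1-20 (the seed and labels are illustrative):

import random

PARTS = list(range(1, 21))  # 20 labeled parts, as selected above

def trial_orders(n_trials=3, seed=None):
    """Return an independently shuffled presentation order for each trial,
    so inspectors cannot anticipate which part comes next."""
    rng = random.Random(seed)
    orders = []
    for _ in range(n_trials):
        order = PARTS[:]
        rng.shuffle(order)
        orders.append(order)
    return orders

for trial, order in enumerate(trial_orders(seed=42), start=1):
    print(f"Trial {trial}: {order}")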
9. • The data recorder may use a table similar to the one given below (1 = OK, 0 = not OK).

Appraiser      A          B          C
Trial       i  ii iii  i  ii iii  i  ii iii
Part  1     1  1  1    1  1  1    1  1  1
Part  2     1  1  1    1  1  1    1  1  1
Part  3     1  1  1    1  1  1    1  1  1
Part  4     0  0  0    0  0  0    0  0  0
Part  5     1  1  1    1  1  1    1  1  1
Part  6     1  1  1    1  1  1    1  1  1
Part  7     1  1  1    1  1  1    1  1  1
Part  8     1  1  1    1  1  1    1  1  1
Part  9     1  1  1    1  1  1    1  1  1
Part 10     1  1  1    1  1  1    1  1  1
Part 11     1  1  1    1  1  1    1  1  1
Part 12     1  1  1    1  1  1    1  1  1
Part 13     1  1  1    1  1  1    1  1  1
Part 14     1  1  1    1  1  1    1  1  1
Part 15     1  1  1    1  1  1    1  1  1
Part 16     1  1  1    1  1  1    1  1  1
Part 17     1  1  1    1  1  1    1  1  1
Part 18     1  1  1    1  1  1    1  1  1
Part 19     1  1  1    1  1  1    1  1  1
Part 20     1  1  1    1  1  1    1  1  1
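For analysis, it can help to hold the recorded calls in a structured array. Below is a minimal sketch that encodes the table above, assuming NumPy is available (the calls[part, appraiser, trial] layout is an illustrative choice):

import numpy as np

# calls[part, appraiser, trial]: 20 parts x 3 appraisers (A, B, C) x 3 trials.
calls = np.ones((20, 3, 3), dtype=int)
calls[3, :, :] = 0  # part 4 was judged "not OK" by everyone on every trial

# Quick consistency check: does each appraiser repeat their own call on every part?
per_appraiser_repeatable = (calls.min(axis=2) == calls.max(axis=2)).all(axis=0)
print(dict(zip("ABC", per_appraiser_repeatable)))  # {'A': True, 'B': True, 'C': True}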
10. • Type 1 errors: when a good part is rejected.
• Type 1 errors increase manufacturing costs: incremental labor and material expenses are needed to re-inspect, repair, or dispose of the suspect parts.
• Type 1 errors are also called “producer’s risk” or alpha errors.
• Type 2 errors: when a bad part is accepted.
• Type 2 errors may occur because the inspector was poorly trained, or rushed through the inspection and inadvertently overlooked a small defect on the part.
• When Type 2 errors occur, defects slip through the containment net and are shipped to the customer.
• Because Type 2 errors put the customer at risk of receiving defective parts, the customer may raise a complaint!
• Type 2 errors are sometimes called “consumer’s risk”.
• Type 2 errors are also called beta errors.
11. What is effectiveness?
The effectiveness of an inspection process is its rate of correct calls.
◦ Correct call (Cc): the number of times the operator(s) classify a sample correctly, i.e., a good sample as good and a bad sample as bad.

Effectiveness = (number of correct evaluations) / (number of total opportunities)
12. What is a false alarm?
◦ False alarm (Fa): the number of times the operator(s) identify a good sample as a bad one.
The probability of a false alarm, also known as Type I error or producer’s risk, is given by:

Fa (false alarm rate) = (number of false alarms) / (number of non-defective items)
13. What is the miss rate?
A miss is a defective item that is classified as non-defective.
◦ Miss rate (Mr): the number of times the operators identify a bad sample as a good one.
The probability of a miss, also known as Type II error or consumer’s risk, is given by:

Mr (miss rate) = (number of misses) / (number of defective items)
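The three formulas above map directly onto a few lines of code. Below is a minimal Python sketch, assuming each appraiser’s calls and the known reference judgment for each observation are stored as flat lists of 1 (OK) and 0 (not OK); the variable and function names are illustrative:

def attribute_msa_metrics(reference, calls):
    """Compute effectiveness, false alarm rate, and miss rate
    from reference truths and one appraiser's decisions (1 = OK, 0 = not OK)."""
    correct = sum(r == c for r, c in zip(reference, calls))
    false_alarms = sum(r == 1 and c == 0 for r, c in zip(reference, calls))
    misses = sum(r == 0 and c == 1 for r, c in zip(reference, calls))
    good = sum(r == 1 for r in reference)
    bad = len(reference) - good
    return {
        "effectiveness": correct / len(reference),  # correct calls / total opportunities
        "false_alarm_rate": false_alarms / good,    # good called bad / non-defective items
        "miss_rate": misses / bad,                  # bad called good / defective items
    }

# Example: 10 observations, 3 of them truly defective.
reference = [1, 1, 1, 0, 1, 1, 0, 1, 0, 1]
calls =     [1, 1, 0, 0, 1, 1, 1, 1, 0, 1]
print(attribute_msa_metrics(reference, calls))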
14. Acceptability criteria:
If all measurement results agree, the gage is acceptable. If the measurement results do not agree, the gage cannot be accepted; it must be improved and re-evaluated.
• Effectiveness: < 80% is not acceptable.
• Miss rate: > 5% is not acceptable.
• False alarm rate: > 10% is not acceptable.
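Those limits can be encoded as a simple gate; here is a minimal sketch using the thresholds above (the function name is illustrative):

def gage_acceptable(effectiveness, miss_rate, false_alarm_rate):
    """Apply the acceptance limits from slide 14:
    effectiveness >= 80%, miss rate <= 5%, false alarm rate <= 10%."""
    return (effectiveness >= 0.80
            and miss_rate <= 0.05
            and false_alarm_rate <= 0.10)

print(gage_acceptable(0.95, 0.02, 0.04))  # True: within all limits
print(gage_acceptable(0.95, 0.08, 0.04))  # False: miss rate above 5%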
15. What could have caused the poor agreement?
What should be done to improve the measurement system?
What should be done to improve consistency?
Brainstorm!
16. If any of the decisions disagree, the measurement system may need improvement. Improvement actions include:
• Reworking the gage,
• Re-training the inspectors,
• Clarifying the accept/reject criteria,
• Adding more lighting.
After implementing the improvement actions, repeat the study. If the error cannot be eliminated, take appropriate corrective actions, such as switching to a new measurement system, adding redundant inspections, or conducting a more extensive study.
18. -: Drawbacks of Inspection :-
(1) Inspection adds to the cost of the product but not to its value.
(2) It is partially subjective; often the inspector has to judge whether a product passes or not. Example: an inspector discovering a slight burnish on a surface must decide whether it is bad or acceptable.
(3) Fatigue and monotony may affect inspection judgment.
(4) Inspection merely separates good and bad items; it does nothing to prevent the production of bad items.
19. -: Quality Cost :-
(A) Failure costs:
1) Internal failure costs: These include,
a) Scrap: the net loss in labor and material resulting from defectives that cannot economically be repaired or used.
b) Rework: the cost of correcting defects to make products fit for use.
c) Retest: the cost of inspection and retest of products that have undergone rework or other revision.
d) Downtime: the cost of idle facilities resulting from defects (example: a printing press down due to a paper break).
e) Yield losses: the cost of process yields lower than what might be attainable with improved controls; includes “overfill” of containers (going to customers) due to variability in filling and measuring equipment.
20. -: Quality Cost :-
2) External failure costs: These include,
a) Complaint adjustment: all costs of investigating and adjusting justified complaints attributable to defective product or installation.
b) Returned material: all costs associated with the receipt and replacement of defective material returned from the field.
c) Warranty charges: all costs involved in service to customers under warranty contracts.
d) Allowances: costs of concessions made to customers due to substandard products being accepted by the customer as is, including the loss in income due to downgrading products for sale as seconds.
21. -: Quality Cost :-
(B) Appraisal costs: These include,
a) Incoming material inspection: the cost of determining the quality of vendor/supplier-made products by inspection on receipt or at the source.
b) Inspection and test: the cost of checking the conformance of the product throughout its progression through the factory, including final acceptance and checking of packing. Also includes testing done at the customer’s premises prior to handing over the product to the customer.
c) Maintaining accuracy of test equipment: includes the cost of operating the system that keeps the measuring instruments and equipment in calibration.
d) Materials and services consumed: includes the costs of product consumed in destructive tests, and of materials and services consumed where significant.
e) Evaluation of stock: includes the costs of testing products in field storage or in stock to evaluate degradation.
22. -: Quality Cost :-
(C) Prevention costs: These include,
a) Quality planning: the broad array of activities which collectively create the quality plan, the inspection plan, the reliability plan, the data system, and numerous specialized plans. Also includes preparation of the manuals and procedures needed to communicate these plans to all concerned.
b) New-product review: includes preparation of bid proposals, evaluation of new designs, preparation of test and experiment programs, and other quality activities associated with the launch of new designs.
c) Training: the costs of preparing training programs for attaining and improving quality performance, including the cost of conducting formal training programs.
d) Process control: includes the part of process control that is conducted to achieve fitness for use, as distinguished from achieving productivity, safety, etc.