How to analyze the items of a questionnaire: finding the Difficulty Value (DV) and Discrimination Power (DP), and performing Distracter Analysis.
Presented to Dr. Huma Lodhi at the University of Education.
3. ITEM ANALYSIS
Item analysis is a statistical technique
used for selecting or rejecting the items
of a test on the basis of the quality of
their distracters, their difficulty value,
and their discrimination power
4. OBJECTIVES OF ITEM ANALYSIS
To select appropriate items for the final draft
To obtain information about the difficulty value (D.V) of all
the items
To determine the discrimination power (D.I) of each item, i.e., how
well it differentiates between more capable and less capable examinees
To identify the modifications to be made in some of the items
To arrange the final draft properly (easy to difficult items)
5. STEPS OF ITEM ANALYSIS
Arrange the scores in descending order
Separate the test papers into two sub-groups
Take the 27% of papers with the highest scores and the 27%
falling at the bottom
Count the number of right answers in the highest group (R.H) and
the number of right answers in the lowest group (R.L)
Count the non-responding (N.R) examinees
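The steps above can be sketched in Python. The data here are hypothetical; each record pairs an examinee's total score with whether they answered the item under study correctly (`None` marks a non-response), and 0.27 is the 27% cutoff from the slide:

```python
# Sketch of the item-analysis steps, assuming each record is
# (total_score, answered_item_correctly) for one examinee.
def split_groups(records, fraction=0.27):
    """Sort by total score (descending) and take the top and bottom 27%."""
    ranked = sorted(records, key=lambda r: r[0], reverse=True)
    n = max(1, round(len(ranked) * fraction))
    return ranked[:n], ranked[-n:]

def count_correct(group):
    """Count right answers (R.H or R.L) in a group."""
    return sum(1 for _, correct in group if correct is True)

def count_nonresponse(group):
    """Count non-responding (N.R) examinees in a group."""
    return sum(1 for _, correct in group if correct is None)

# Hypothetical data: 10 examinees, (total score, this item correct?)
records = [(95, True), (90, True), (88, True), (80, True), (75, False),
           (70, True), (65, False), (60, None), (55, False), (40, False)]
high, low = split_groups(records)   # 27% of 10 rounds to 3 papers per group
rh, rl = count_correct(high), count_correct(low)
```

In practice the whole answer sheet would be processed this way once per item, reusing the same high/low split.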
6. Item analysis is done for obtaining:
a) Difficulty value (D.V)
b) Discriminative power (D.P)
c) Distracter Analysis
7. DIFFICULTY VALUE (D.V)
“The difficulty value of an item is defined as the
proportion or percentage of the examinees who have
answered the item correctly”
- J.P. Guilford
8. The formula for difficulty value (D.V)
D.V = (R.H + R.L)/ (N.H + N.L)
• R.H – rightly answered in highest group
• R.L - rightly answered in lowest group
• N.H – no of examinees in highest group
• N.L - no of examinees in lowest group
9. When there are non-responding examinees,
the formula for difficulty value (D.V) becomes
D.V = (R.H + R.L)/ [(N.H + N.L)- N.R]
• R.H – rightly answered in highest group
• R.L - rightly answered in lowest group
• N.H – no of examinees in highest group
• N.L - no of examinees in lowest group
• N.R – no of non-response examinees
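Both D.V formulas can be combined into one helper, since setting N.R to 0 recovers the basic formula from the previous slide. The counts below are hypothetical, chosen only to illustrate the arithmetic:

```python
def difficulty_value(rh, rl, nh, nl, nr=0):
    """D.V = (R.H + R.L) / ((N.H + N.L) - N.R); nr=0 gives the basic formula."""
    return (rh + rl) / ((nh + nl) - nr)

# Hypothetical example: 20 + 8 right answers, 27 examinees per group,
# 4 non-responses -> D.V = 28 / 50 = 0.56
dv = difficulty_value(rh=20, rl=8, nh=27, nl=27, nr=4)
```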
10. DISCRIMINATION INDEX (D.I)
“Index of discrimination is that ability of an item on
the basis of which the discrimination is made
between superiors and inferiors”
- Blood and Budd (1972)
11. TYPES OF DISCRIMINATION INDEX (D.I)
Zero discrimination or No discrimination
Positive discrimination
Negative discrimination
12. ZERO DISCRIMINATION OR NO
DISCRIMINATION
An item shows zero discrimination when it is
answered correctly by all the examinees
(everyone knows the answer), or when it is not
answered correctly by any of the examinees
13. POSITIVE DISCRIMINATION INDEX
An item discriminates positively when it is correctly
answered by superiors and not answered correctly by
inferiors. The discrimination index ranges from -1 to +1.
15. The formula for the discrimination index (D.I)
D.I = (R.H - R.L) / N.H   (N.H = N.L, the size of one 27% group)
• R.H – rightly answered in highest group
• R.L - rightly answered in lowest group
• N.H – no of examinees in highest group
• N.L - no of examinees in lowest group
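Since the two 27% groups have the same size, either N.H or N.L serves as the denominator; a minimal sketch with hypothetical counts:

```python
def discrimination_index(rh, rl, n_group):
    """D.I = (R.H - R.L) / N, where N is the size of one (27%) group."""
    return (rh - rl) / n_group

# Hypothetical example: 20 vs. 8 right answers, 27 examinees per group
di = discrimination_index(rh=20, rl=8, n_group=27)  # 12 / 27
```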
16. Another method for the discrimination index (D.I)
If a Likert scale is used for data collection,
compute the correlation between each item's scores and the
total scores (the item-total correlation).
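For Likert-type data, the item-total (Pearson) correlation can be computed directly; a self-contained sketch with hypothetical responses (the scores below are invented for illustration):

```python
import math

def pearson(x, y):
    """Pearson correlation between item scores and total scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 5-point Likert item vs. total questionnaire scores
item = [5, 4, 4, 3, 2, 1]
total = [48, 45, 40, 35, 30, 22]
r = pearson(item, total)  # item-total correlation used as the D.I
```

A high positive r means the item ranks examinees the same way the whole questionnaire does; a near-zero or negative r flags the item for revision.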
17. General guidelines for the discrimination index (D.I)
According to Ebel:
D.I           Item evaluation
≥ 0.40        Very good items
0.30 – 0.39   Reasonably good, but subject to improvement
0.20 – 0.29   Marginal items, need improvement
≤ 0.19        Poor items; to be rejected or revised
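Ebel's guideline table maps onto a small classifier (the thresholds are taken directly from the table above; the label strings are paraphrases):

```python
def evaluate_item(di):
    """Classify an item by its discrimination index per Ebel's guidelines."""
    if di >= 0.40:
        return "very good"
    if di >= 0.30:
        return "reasonably good, subject to improvement"
    if di >= 0.20:
        return "marginal, needs improvement"
    return "poor; reject or revise"
```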
18. General guidelines for difficulty value (D.V)
A low difficulty value index means the item is a difficult one
e.g. D.V = 0.20 » only 20% answered that item correctly,
so the item is too difficult
A high difficulty value index means the item is an easy one
e.g. D.V = 0.80 » 80% answered that item correctly, so the
item is too easy
20. Relationship between difficulty value and
discrimination power
The two (D.V & D.I) are complementary, not contradictory, to each other
Both should be considered in selecting good items
An item with negative or zero discrimination is to be rejected
whatever its difficulty value
21. CRITERIA FOR SELECTING AND REJECTING
ITEMS
Only items with a positive discrimination index are selected
Items with a negative or zero discrimination index are
rejected
Items with a very high or very low difficulty value are rejected
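The criteria above can be expressed as one predicate. The D.V band of 0.20–0.80 is an assumption inferred from the difficulty-value guidelines on the earlier slide; the deck does not state exact cutoffs for "high" and "low":

```python
def select_item(dv, di, dv_low=0.20, dv_high=0.80):
    """Keep an item only if D.I is positive and D.V is neither
    too low (too hard) nor too high (too easy).
    The 0.20-0.80 band is an assumed, adjustable convention."""
    return di > 0 and dv_low <= dv <= dv_high

# select_item(0.56, 0.44) keeps the item;
# select_item(0.10, 0.44) rejects it as too difficult.
```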
23. DISTRACTER ANALYSIS
First question of item analysis: How many
people choose each response?
If there is only one best response, then all
other response options are distracters.
Example (N = 35):
Which method has the best internal consistency?    #
a) projective test       1
b) peer ratings          1
c) forced choice        21
d) differences n.s.     12
24. DISTRACTER ANALYSIS (CONT’D)
A perfect test item would have two characteristics:
1. Everyone who knows the item gets it right
2. People who do not know the item have their responses
equally distributed across the wrong answers
It is not desirable to have one of the distracters chosen
more often than the correct answer.
This result indicates a potential problem with the
question. This distracter may be too similar to the correct
answer and/or there may be something in either the stem
or the alternatives that is misleading.
25. DISTRACTER ANALYSIS (CONT’D)
Calculate the number of people expected to choose each of the
distracters. If wrong answers were random, the same number
would be expected for each wrong response:
Persons expected to choose each distracter
= N answering incorrectly / number of distracters
= 14 / 3 ≈ 4.7
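The expected-count calculation, applied to the observed counts from the earlier example (N = 35, 21 correct, so 14 incorrect spread over 3 distracters):

```python
def expected_per_distracter(n_incorrect, n_distracters):
    """If wrong answers were random, each distracter gets an equal share."""
    return n_incorrect / n_distracters

observed = {"a": 1, "b": 1, "d": 12}        # counts for the three distracters
expected = expected_per_distracter(14, 3)   # 14 / 3, about 4.7
popular = [k for k, n in observed.items() if n > expected]
```

Here only distracter d exceeds its expected count, which is exactly the pattern the next slide discusses.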
26. DISTRACTER ANALYSIS (CONT’D)
When the number of persons choosing a distracter
significantly exceeds the number expected, there are 2
possibilities:
1. It is possible that the choice reflects partial knowledge
2. The item is a poorly worded trick question
An unpopular distracter may lower item and test difficulty
because it is easily eliminated
An extremely popular distracter is likely to lower the
reliability and validity of the test