3. OBJECTIVES
This section enables the students to:
1. Recognize the importance of data gathering;
2. Identify the various data collection techniques and sources of
data;
3. Distinguish primary from secondary data sources;
4. Describe the various instruments for data gathering;
5. Cite the advantages of the use of such instruments;
6. Recognize the limitations of certain research instruments; and
7. Design instruments for data gathering.
4. If one collects the wrong data, the analysis, interpretation, and conclusions made from such data would be wrong.
A “good” research study is largely
dependent upon the kind of instruments
used and how they are administered.
5. 1. Is the tool appropriate for the study?
2. Was there a trial run of the tool to determine the difficulty and validity
of the items included?
3. Are the items in the instrument relevant to the problem at hand?
4. How long does it take to finish answering the instrument?
5. Are the questions clearly stated?
6. Has the instrument stood the test of time? How popular is it?
7. What are the critiques on its use? Were these considered?
8. Will responses lend themselves to quantification and descriptive qualification?
9. Is the instrument easy to administer?
10. Is scoring facilitated?
8. Advantages:
Less expensive to administer
Greater confidence in the anonymity of respondents
Less pressure on the part of the respondents for immediate
response
Limitations:
Data collected depends largely upon the information voluntarily
supplied by the respondent.
Researcher does not have a chance to probe into a topic.
Mailed questionnaires – the problem of low returns
9. Advantages:
The researcher does not encounter problems of missing information, blank items, and the like
No problem about misunderstood questions
Probing is not a problem
Limitations:
A lot of time and money is spent
Heavy reliance upon verbal reports, the veracity of which is not easily checked
10. Less expensive, with relatively rapid completion and high response rates
The researcher is limited to telephone subscribers, who are generally not representative of the population.
Impossible to conduct a lengthy interview over
the telephone.
11. are generally used to gather information about people, mostly about their socio-demographic characteristics, their knowledge, attitudes, feelings, motivations, anticipations and future plans, or past behaviour.
Since they mostly depend on verbal reports, the questions must be carefully formulated so that the researcher does not get erroneous data.
12. 1. Define or qualify terms that could easily be
misinterpreted.
2. Beware of double negatives.
3. Be careful of inadequate alternatives.
4. Double-barrelled questions should be avoided.
5. Underline a word if you wish to indicate special
emphasis.
6. When asking for ratings or comparisons, a point of reference is necessary.
7. Design questions that will give a complete response.
8. Phrase questions so that they are appropriate for all respondents.
9. Questions must not suggest answers.
(Best and Kahn, 1998)
14. The interview is defined as a face-to-face interaction between two persons.
3 Elements:
Interviewer – the one who asks questions
Interviewee or respondent – the one who
supplies the information asked.
Interview schedule – formal list of questions
used in the interview.
15. Scheduled-structured interview – uses an instrument in which the questions, their wording, and their sequence are fixed and are identical for every respondent.
Nonscheduled-structured interview – uses only guide questions
for the interview.
Nonscheduled interview – does not use a pre-specified set of questions. The interviewee does most of the talking, with little or no direction from the interviewer.
16. An attitude scale is an instrument that attempts to obtain a measure of the attitude or belief of an individual.
Semantic Differential Scale – attempts to find the meanings that objects and people possess.
Likert Scale – the most commonly used attitude scale in educational research, named after the man who designed it.
Projective methods – involve some sort of imaginative activity on the part of
the individual in interpreting ambiguous stimuli.
17. TECHNIQUES
1. Semantic Differential Scale – attempts to find the meanings
that objects and people possess.
2. Likert Scale – a measure of the attitudes, feelings, and behaviours of the students (a scoring sketch follows this list).
3. Projective Methods – involve some sort of imaginative
activity on the part of the individual in interpreting
ambiguous stimuli.
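As a rough illustration of how a Likert scale yields a score, here is a minimal Python sketch; the items, their keying, and the five-point coding are assumptions made for the example, not taken from the source. Negatively worded items are reverse-scored before the responses are summed into a single attitude score.

# Hypothetical four-item, five-point Likert scale.
# Responses are coded 1 = Strongly Disagree ... 5 = Strongly Agree.
ITEMS = {            # item id -> True if positively worded
    "q1": True,
    "q2": False,     # negatively worded, so reverse-scored
    "q3": True,
    "q4": False,
}

def score_likert(responses, scale_max=5):
    """Return one respondent's total attitude score."""
    total = 0
    for item, positively_worded in ITEMS.items():
        raw = responses[item]
        total += raw if positively_worded else (scale_max + 1 - raw)
    return total

# One respondent's answers (1-5 per item): 4 + (6-2) + 5 + (6-1) = 18
print(score_likert({"q1": 4, "q2": 2, "q3": 5, "q4": 1}))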
18. Observation is a process whereby the researcher watches the research situation.
Guidelines to Good Observation
1. The observation scheme must be carefully planned.
Structured Observation – refers to the presence of a guide or tools to delimit the subject for observation
Unstructured Observation – refers to observation without a predetermined guide; the observer watches all events pertinent to his purpose.
2. The observer must be objective.
3. The observer must be able to separate facts from interpretation of the facts.
4. Observations must be carefully and expertly recorded and may be recorded
periodically.
It demands fewer subjects under observation but permits recording of data (behaviour) simultaneously with its spontaneous occurrence.
When people know that they are being observed, they may deliberately try to create favorable or unfavorable impressions on the observer.
There are unforeseeable factors such as weather conditions that
may interfere with observational tasks.
20. OBJECTIVE METHODS OF OBSERVATION
Test – systematic procedure in which the individual tested is presented with a set of constructed stimuli to which he responds (Antes and Hopkins, 1993)
Scale – set of symbols or numerals so constructed that the
symbols or numerals can be assigned by rule to the
individuals to whom the scale is applied.
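To illustrate the idea of a scale as a rule that assigns numerals to the individuals to whom it is applied, here is a minimal Python sketch; the cut-off values and the five-point range are invented for the example, not taken from the source.

def assign_scale_point(raw_score):
    """Assign a 1-5 scale numeral to an individual by a fixed rule
    applied to a raw score (assumed to run from 0 to 50 here)."""
    cut_offs = [(45, 5), (35, 4), (25, 3), (15, 2)]   # lowest raw score -> numeral
    for lowest, numeral in cut_offs:
        if raw_score >= lowest:
            return numeral
    return 1

print(assign_scale_point(38))   # 4
print(assign_scale_point(10))   # 1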
21. Reliability means the extent to which a test is dependable, stable, and self-consistent.
3 APPROACHES
1. Stability
Method: Test-retest
2. Equivalence
Method: Parallel forms (Alternate forms or Equivalent forms)
3. Internal Consistency
Methods (see the sketch after this list):
• A. Split-Half Method
- Spearman-Brown Prophecy Formula
• B. Kuder-Richardson Methods
1. Kuder-Richardson Formula 20
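To make these internal-consistency methods concrete, here is a minimal Python sketch of the split-half estimate stepped up to full length with the Spearman-Brown prophecy formula, and of Kuder-Richardson Formula 20. The item matrix is invented for illustration and is not from the source; both functions assume dichotomously scored (0/1) items.

# Each row is one examinee, each column one item (1 = correct, 0 = wrong).
scores = [
    [1, 1, 0, 1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0, 0, 0],
    [1, 1, 0, 0, 1, 0, 1, 1],
    [1, 0, 1, 1, 1, 0, 0, 1],
]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def split_half_spearman_brown(rows):
    """Correlate odd-item and even-item half scores, then step the
    half-test correlation up to full length with Spearman-Brown."""
    odd = [sum(r[0::2]) for r in rows]     # items 1, 3, 5, ...
    even = [sum(r[1::2]) for r in rows]    # items 2, 4, 6, ...
    n = len(rows)
    mo, me = sum(odd) / n, sum(even) / n
    cov = sum((o - mo) * (e - me) for o, e in zip(odd, even)) / n
    r_half = cov / (variance(odd) ** 0.5 * variance(even) ** 0.5)
    return 2 * r_half / (1 + r_half)       # Spearman-Brown step-up

def kr20(rows):
    """Kuder-Richardson Formula 20 for dichotomous (0/1) items."""
    k = len(rows[0])
    var_total = variance([sum(r) for r in rows])
    pq = 0.0
    for j in range(k):
        p = sum(r[j] for r in rows) / len(rows)   # proportion passing item j
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)

print("Split-half (Spearman-Brown):", round(split_half_spearman_brown(scores), 3))
print("KR-20:", round(kr20(scores), 3))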
23. One can increase reliability if external sources of variation are minimized and the conditions under which the measurement occurs are standardized.
THE RELIABILITY COEFFICIENT MAY BE AFFECTED BY:
1. Length of the test (see the sketch after this list)
2. Degree of homogeneity of content
3. Ability range of students
4. Appropriateness of items
5. Scoring accuracy
6. Testing conditions
7. Speededness
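As a worked illustration of the first factor above, the Spearman-Brown prophecy formula predicts how the reliability coefficient changes when a test is lengthened or shortened with comparable items. The numbers below are hypothetical, not from the source.

def spearman_brown_prophecy(rho_old, length_factor):
    """Predicted reliability when a test is lengthened (or shortened)
    by the given factor, assuming the added items are comparable."""
    return (length_factor * rho_old) / (1 + (length_factor - 1) * rho_old)

# A 20-item test with reliability 0.70, doubled to 40 comparable items:
print(spearman_brown_prophecy(0.70, 2))    # about 0.82
# The same test cut in half (length factor 0.5):
print(spearman_brown_prophecy(0.70, 0.5))  # about 0.54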
24. VALIDITY ANALYSIS
Definition: A test is valid to the extent that inferences made from it are appropriate, meaningful, and useful.
The validity of a test/scale is the extent to which it measures what it
claims to measure.
25. 3 CATEGORIES IN ESTABLISHING VALIDITY
1. Content Validity
2. Criterion-Related Validity
• Predictive Validity Studies
• Concurrent Validity Studies
3. Face Validity
A construct is a theoretical, intangible quality or trait in which individuals differ (Messick, as cited by Gregory, 1996).
26. APPROACHES TO CONSTRUCT VALIDITY
1. Analysis to determine if the test items or sub-tests are homogeneous and therefore measure a single construct.
2. Study of developmental changes to determine if they are consistent with the
theory of the construct.
3. Research to ascertain if group differences on test scores are theory-consistent.
4. Analysis to determine if intervention effects on test scores are theory-consistent.
5. Correlation of the test with other related and unrelated tests and measures (see the sketch after this list).
6. Factor analysis of test scores in relation to other sources of information.
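As an illustration of approach 5, the sketch below correlates hypothetical scores on a new test with a related measure, where a high (convergent) correlation is expected, and with an unrelated measure, where a near-zero (discriminant) correlation is expected. The data are invented for the example, not from the source.

from statistics import correlation   # Pearson r, available in Python 3.10+

new_test        = [12, 15,  9, 20, 14, 17, 11, 18]
related_measure = [30, 34, 25, 44, 33, 40, 27, 41]   # taps the same construct
unrelated       = [ 4,  5,  3,  3,  6,  6,  7,  6]   # taps a different construct

print("Convergent r:  ", round(correlation(new_test, related_measure), 2))  # close to 1
print("Discriminant r:", round(correlation(new_test, unrelated), 2))        # near 0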