1. IPADS & MOBILES: LET’S (NOT) DO MORE
MEDIA COMPARISON STUDIES
CHARLES B. HODGES, PH.D.
2. FREQUENT QUESTIONS
• Is online instruction as good as face-to-face
instruction?
• Are computer-based lectures as good as face-to-face
lectures?
• Is reading text on an eReader as effective as
reading the text from a paper-based book?
3. CHARGE
• You will see published research studies in which
questions like those on the previous slide were
investigated. Before asking such questions
yourself, consider the following viewpoints.
• References for the papers and resources highlighted
are provided on the next-to-last slide in this
presentation.
4. LOCKEE, BURTON, CROSS, 1999
• “Stakeholders desire to prove that participants in distance-delivered
courses receive the same quality instruction off-campus as those
involved in the ‘traditional’ classroom setting. However, the desire to
prove that the quality of such distributed offerings is equal to the
quality of on-campus programming often results in comparisons of
achievement between the two groups of student participants.
Statistically, such a research design almost guarantees that the
desired outcome will be attained--that indeed distance learners
perform as well as campus-based students.” (p. 33)
5. LOCKEE, BURTON, CROSS, 1999
• This excellent paper traces the evolution of media
comparison studies in the professional literature and
discusses the several problems associated with
such studies.
• Even better, the authors suggest appropriate designs
for evaluation and research in distance education.
6. MAYER, 2001
• “The research question for media effects concerns
whether students learn more deeply when material
is presented via one medium - such as computer-
based animation and narration - than another
medium - such as book-based illustrations and text.
In short, we can ask, “Are computers more effective
than textbooks?” (p. 69)
7. MAYER, 2001
• “Media scholars have come to the conclusion that it
is not productive to continue with traditional media
research, in which one medium is compared to
another (Clark, 1983; Clark & Salomon, 1986;
Salomon, 1994; Wetzel, Radtke, & Stern, 1994). Media
research can be criticized on empirical,
methodological, conceptual, and theoretical
grounds.” (p. 70)
8. MAYER, 2001
• “In summary, the consensus among educational
psychologists is that questions about which medium
is best are somewhat unproductive questions.” (p. 71)
9. HEAD, LOCKEE, & OLIVER, 2002
• Three “M”s of the distance education environment
• Method
• Media
• Mode
10. HEAD, LOCKEE, & OLIVER, 2002
• Three “M”s of the distance education environment
• Method:
• strategies/techniques to facilitate learning outcomes
• For example: project-based learning, class
discussion, one-to-many lecture
11. HEAD, LOCKEE, & OLIVER, 2002
• Three “M”s of the distance education environment
• Media:
• “Media attributes are traditionally defined as ‘...the
properties of stimulus materials which are manifest in
the physical parameters of media’ (Levie & Dickie,
1971, p. 860)” (p. 262)
• Examples: auditory or visual stimulus, level of realism,
ability to assess progress, or ability to replay
12. HEAD, LOCKEE, & OLIVER, 2002
• Mode:
• The system used to convey instruction
• For example: online, face-to-face, iPad
13. HEAD, LOCKEE, & OLIVER, 2002
• “When assessing the effectiveness of a given
distance education experience, perhaps evaluators
will focus the questions on the many factors that
influence learning, such as the selection of
appropriate instructional methods or the leveraging
of suitable media attributes, instead of simply
concentrating on the delivery mode itself.” (p. 267)
14. LOCKEE, MOORE, & BURTON, 2002
• Regarding the evaluation of distance education...
• “Unfortunately, what may seem the most logical approach to
determining effectiveness is often theoretically unsound. For example,
comparing student achievement between distance and face-to-face
courses may seem a simple solution, yet the design is flawed for a
number of reasons.” (p. 21)
• This practical article provides suggestions for appropriate evaluations
of distance education programs that can be adapted for current
technologies.
15. NO SIGNIFICANT DIFFERENCE
WEBSITE
• Thomas L. Russell
• Website companion to the book The No Significant
Difference Phenomenon
• http://www.nosignificantdifference.org/
16. NO SIGNIFICANT DIFFERENCE
WEBSITE
• http://www.nosignificantdifference.org/faq.asp#Q1
• “Mr. Russell found that an overwhelming number of studies showed
that when the course materials and teaching methodology were held
constant, there were no significant differences (NSD) between student
outcomes in a distance delivery course as compared to a face to
face course. In other words, student outcomes in distance delivery
courses were neither worse nor better than those in face to face
courses.”
• Key point: “neither worse nor better”.
17. FINAL CHARGE
• Why would you expect reading (or anything else) on
an iPad to be somehow “better”? Are there
important variables to study other than simply “on
iPad” vs. “not on iPad” ?
• What does “no significant difference” really mean?
• Think Critically.
18. REFERENCES
• Head, T.J., Lockee, B.B., & Oliver, K.M. (2002). Method, media, and mode: Clarifying the
discussion of distance education effectiveness. Quarterly Review of Distance Education,
3(3), 261-268.
• Lockee, B.B., Burton, J.K., & Cross, L. (1999). No comparison: Distance education finds a
new use for ‘no significant difference’. Educational Technology Research & Development,
47(3), 33-42.
• Lockee, B.B., Moore, M., & Burton, J.K. (2002). Measuring success: Evaluation strategies for
distance education. EDUCAUSE Quarterly, 1, 20-26. Retrieved from
http://net.educause.edu/ir/library/pdf/EQM0213.pdf
• Mayer, R.E. (2001). Multimedia learning. New York: Cambridge University Press.