When thinking about CAA, most people think immediately of multiple-choice questions (MCQs), but there are many other types of question one could computerise. For example, [run through and explain the above, which are the types available via the web version of QMP]. There are other types as well, e.g. [ranking, ‘hot spot’, drag and drop…].
You know the importance of feedback for effective learning. Computers can provide it very well, if there’s a thoughtful human being behind the machinery. Practice wisdom about CAA feedback says it should contain:
- a statement as to whether the chosen option is correct or not;
- (if appropriate) a statement which gives the correct answer and a sentence or so on why it is correct;
- a few words to clear up possible misunderstanding about the wrong choice;
- the source of the information on the correct answer (to help the student find out more and, importantly, to give staff a starting point if they wish to amend a question, e.g. by supplying more up-to-date data);
- language which is clear and not discouraging.

Going beyond the minimum, you might also include:
- a related question for students to think about;
- a web link to further information.

If you do this, choose a link that is likely to have a reasonable ‘shelf life’ (i.e. it isn’t likely to disappear quickly). Sometimes it may be better to link to the home page of a website (e.g. HM Prison Service) rather than to the precise page (which might be short-lived) containing a particular statistic. For more info, see Chapter 4 of Blueprint.
If you’re interested in setting ‘objective’ questions for your students (via CAA or not), a crucial issue is the intellectual level at which they should be pitched. The general expectation is that greater intellectual demands are placed on students as they move through their programmes, e.g. from displaying knowledge and understanding at the beginning, to using critical and evaluative skills by the time they obtain their degrees. Very often, the underpinning educational theory is Bloom’s Taxonomy. Bloom (1956) postulated six types of educational objectives:
- knowledge
- comprehension
- application
- analysis
- synthesis
- evaluation
and argued that the further one goes down the list, the more difficult the objectives are to attain. It therefore follows that it is unreasonable to require all Level 1 students to show competence in synthesis and evaluation. Conversely, it would not be demanding enough if final-year Honours students (Level 3) were only required to show knowledge and comprehension.
Here are some question formats for testing knowledge.
And some for testing comprehension (provided the material hasn’t been taught or discussed in class – if it has, they would simply be knowledge-recall questions).
SPQR could be used in lots of ways; some are highlighted here. Some of the ideas are mine; others come from the Editorial Board, from students who have tried SPQR, and from the literature on CAA and feedback.

Left column: using collections of questions. Most people’s first thought is summative assessment (saves marking time, etc.), but they may reject that for various reasons (including its being too high stakes). They may then start thinking about other uses of collections of questions.

Right column: using individual questions. The idea came to me gradually at first, but was boosted by reading Littlejohn (2003) on reusing online resources. The telling point made there is that academics are very resistant to using existing resources (e.g. books, videos) ‘as is’ (related to the ‘not invented here’ problem); they prefer to use bits, woven into their own, personal ‘lesson plan’. The same is true for electronic resources – it is much more likely that teachers will use small items from elsewhere, customised and perhaps adapted. Littlejohn argues strongly that for electronic learning resources to have much chance of being re-used by others, they should consist of small, ‘granular’ ‘learning objects’. SPQR items (question, answer, feedback notes) fit the bill well. So here are some ideas for using items individually. It is easy to include or exclude the feedback notes.

Any more suggestions? DISCUSSION
Take-up of CAA has been highly variable across disciplines: more ready in maths, science, engineering and medicine than in some other areas, e.g. the social sciences.

Discuss in small groups (10 minutes):
- Use of CAA in your discipline (that you know of).
- Potential/advantages, if any.
- Limitations/disadvantages that you see.

Work with the group to come up with two lists (get a volunteer scribe).

Advantages include:
- Saves staff time in marking.
- Rapid feedback (esp. formative feedback).
- Consistency with teaching modes (use of computers).

Objections include:
- Risk of system failure.
- Security.
- Measurement issues (student familiarity, speed of working).
- ‘It’s Mickey Mouse’ (‘objective testing’ is allegedly good only for very low-level testing of useless information; variation: ‘Tests using QuestionMark have no educational value’).
- It’s rote learning (‘there’s no point in testing empty facts’ – note the pejorative ‘empty’, and the assumption that facts have no place in HE).
- The discipline is discursive, contested (assumption that this is entirely true).
- ‘Not invented here’ (this syndrome is quite powerful in British HE).
- Hard to create good questions (e.g., for multiple-choice, a good ‘key’ and plausible distractors; tough, too, to write good, helpful feedback).
- Reluctance to share questions.

Anyone think that there is no potential at all for CAA in their discipline?
A qualitative study, but here are a few numbers. Also: wide range of subjects.
Simon to do this.
End with a joke: “To err is human, but to really screw things up requires a computer.”