The document provides an overview of the IMS Question & Test Interoperability (QTI) specification, which describes a data model for representing assessment content and results. QTI allows for the exchange of assessment items between authoring tools, item banks, test builders, and learning systems. It has undergone several versions since 1999 to support additional features like adaptive testing and metadata.
2. Overview
• The IMS Question & Test Interoperability (QTI) specification describes a data model for the representation of question and test data and their corresponding results reports
• It enables the exchange of items among authoring tools, item banks, test construction tools, learning systems, and assessment delivery systems
3. History
• March 1999 – initial V0.5 specification
• November 1999 – IMS Question & Test Interoperability V1.0
• March 2003 – QTI V1.2.1
• September 2003 – draft QTI V2.0
• June 2006 – QTI V2.1 (current version)
4. Specification Use Cases
• Provide a well-documented content format for storing and exchanging items independent of the authoring tool used to create them
• Support the deployment of item banks across a wide range of learning and assessment delivery systems
• Provide a well-documented content format for storing and exchanging tests independent of the test construction tool used to create them
• Support the deployment of items, item banks, and tests from diverse sources in a single learning or assessment delivery system
• Provide systems with the ability to report test results in a consistent manner
5. The Role of Assessment Tests and Assessment Items
6. Tools
• Authoring Tool: creating or modifying an assessment item
• Item Bank: collecting and managing items
• Test Construction Tool: assembling tests from individual items
• Assessment Delivery System: managing the delivery of assessments to candidates
• Learning System: enables or directs learners in learning activities
7. Actors
• Author: creates an assessment item; quality may be controlled by a second person
• Item Bank Manager: manages a collection of assessment items
• Test Constructor: creates tests from items
• Proctor: oversees the delivery of an assessment
• Scorer: assesses the candidate's responses; may be replaced by an automated scoring system
• Tutor: supports the learning process for a learner
• Candidate: the person being assessed
8. Structure of this Specification
• IMS Question & Test Interoperability Overview
• IMS Question & Test Interoperability Implementation Guide
• IMS Question & Test Interoperability Assessment Test, Section, and Item Information Model
• IMS Question & Test Interoperability XML Binding
• IMS Question & Test Interoperability Results Reporting
• IMS Question & Test Interoperability Integration Guide
• IMS Question & Test Interoperability Conformance Guide
• IMS Question & Test Interoperability Meta-data and Usage Data
• IMS Question & Test Interoperability Migration Guide
13. Composition of Water
• multiple responses may be selected; there are multiple correct answers
• one point is given for each correct answer
• a third, incorrect answer reduces the score by 2 points
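A scoring rule like this can be expressed with a multiple-cardinality response declaration and a mapping. The sketch below is illustrative (the element identifiers, choice text, and mapped values are assumptions, not the spec's exact sample file):

```xml
<!-- Response declaration: two correct values, mapped scoring, floor of 0 -->
<responseDeclaration identifier="RESPONSE" cardinality="multiple" baseType="identifier">
  <correctResponse>
    <value>H</value>
    <value>O</value>
  </correctResponse>
  <mapping lowerBound="0" defaultValue="0">
    <mapEntry mapKey="H" mappedValue="1"/>
    <mapEntry mapKey="O" mappedValue="1"/>
    <mapEntry mapKey="Cl" mappedValue="-2"/> <!-- the incorrect choice costs 2 points -->
  </mapping>
</responseDeclaration>
<!-- ... inside the itemBody: maxChoices="0" allows any number of selections -->
<choiceInteraction responseIdentifier="RESPONSE" shuffle="true" maxChoices="0">
  <prompt>Which of the following elements are used to form water?</prompt>
  <simpleChoice identifier="H">Hydrogen</simpleChoice>
  <simpleChoice identifier="O">Oxygen</simpleChoice>
  <simpleChoice identifier="Cl">Chlorine</simpleChoice>
</choiceInteraction>
<!-- The standard map_response template sums the mapped values into SCORE -->
<responseProcessing
  template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/map_response"/>
```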
15. Grand Prix of Bahrain
• the correct answer is composed of an ordered list of values
• the shuffle attribute tells the delivery engine to shuffle the order of the choices before displaying them to the candidate
• use the standard response processing template (score 1 or 0)
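An ordered response of this kind maps onto an orderInteraction. A minimal sketch, with illustrative driver identifiers standing in for the real choices:

```xml
<!-- Ordered cardinality: the correct response is a sequence of identifiers -->
<responseDeclaration identifier="RESPONSE" cardinality="ordered" baseType="identifier">
  <correctResponse>
    <value>DriverA</value>
    <value>DriverB</value>
    <value>DriverC</value>
  </correctResponse>
</responseDeclaration>
<!-- ... inside the itemBody; shuffle="true" randomizes the initial display order -->
<orderInteraction responseIdentifier="RESPONSE" shuffle="true">
  <prompt>Put the drivers into their finishing order.</prompt>
  <simpleChoice identifier="DriverA">Driver A</simpleChoice>
  <simpleChoice identifier="DriverB">Driver B</simpleChoice>
  <simpleChoice identifier="DriverC">Driver C</simpleChoice>
</orderInteraction>
<!-- Standard template: SCORE is 1 only if the whole sequence matches exactly -->
<responseProcessing
  template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/match_correct"/>
```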
19. Characters and Plays
• directed pair: maps from a source set into a target set
• each character can be in only one play
• each play may contain all four characters
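These constraints are expressed with a matchInteraction: matchMax on each character limits it to one pairing, while matchMax on each play allows up to four. A sketch with illustrative identifiers:

```xml
<!-- directedPair responses point from the character set into the play set -->
<responseDeclaration identifier="RESPONSE" cardinality="multiple" baseType="directedPair">
  <correctResponse>
    <value>C1 P1</value>
    <value>C2 P2</value>
  </correctResponse>
</responseDeclaration>
<!-- ... inside the itemBody -->
<matchInteraction responseIdentifier="RESPONSE" shuffle="true" maxAssociations="4">
  <prompt>Match the characters to the plays in which they appear.</prompt>
  <simpleMatchSet>
    <!-- matchMax="1": each character belongs to only one play -->
    <simpleAssociableChoice identifier="C1" matchMax="1">Capulet</simpleAssociableChoice>
    <simpleAssociableChoice identifier="C2" matchMax="1">Prospero</simpleAssociableChoice>
  </simpleMatchSet>
  <simpleMatchSet>
    <!-- matchMax="4": a play may receive all four characters -->
    <simpleAssociableChoice identifier="P1" matchMax="4">Romeo and Juliet</simpleAssociableChoice>
    <simpleAssociableChoice identifier="P2" matchMax="4">The Tempest</simpleAssociableChoice>
  </simpleMatchSet>
</matchInteraction>
```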
21. Richard III (Take 1)
• selecting choices (buttons) and using them to fill the gaps
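Filling gaps from a shared pool of choices is a gapMatchInteraction. The sketch below follows the well-known Richard III passage; treat the exact markup as illustrative:

```xml
<!-- Each answer is a directed pair: chosen gapText -> gap it fills -->
<responseDeclaration identifier="RESPONSE" cardinality="multiple" baseType="directedPair">
  <correctResponse>
    <value>W G1</value>
    <value>Su G2</value>
  </correctResponse>
</responseDeclaration>
<!-- ... inside the itemBody -->
<gapMatchInteraction responseIdentifier="RESPONSE" shuffle="false">
  <!-- The pool of draggable/selectable choices -->
  <gapText identifier="W" matchMax="1">winter</gapText>
  <gapText identifier="Sp" matchMax="1">spring</gapText>
  <gapText identifier="Su" matchMax="1">summer</gapText>
  <blockquote>
    <p>Now is the <gap identifier="G1"/> of our discontent<br/>
       Made glorious <gap identifier="G2"/> by this sun of York</p>
  </blockquote>
</gapMatchInteraction>
```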
23. Richard III (Take 2)
• use the combo box to fill each in-line choice independently
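The combo-box variant uses an inlineChoiceInteraction embedded directly in the running text. A sketch (choice identifiers are illustrative):

```xml
<!-- A single identifier response per inline choice -->
<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="identifier">
  <correctResponse>
    <value>Y</value>
  </correctResponse>
</responseDeclaration>
<!-- ... inside the itemBody: the interaction sits in-line within the paragraph -->
<blockquote>
  <p>Now is the winter of our discontent<br/>
     Made glorious summer by this sun of
     <inlineChoiceInteraction responseIdentifier="RESPONSE" shuffle="false">
       <inlineChoice identifier="G">Gloucester</inlineChoice>
       <inlineChoice identifier="L">Lancaster</inlineChoice>
       <inlineChoice identifier="Y">York</inlineChoice>
     </inlineChoiceInteraction>
  </p>
</blockquote>
```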
25. Richard III (Take 3)
• use a text entry (i.e., fill-in-the-blank)
• expected length = 15
• matching is case sensitive
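The fill-in-the-blank variant swaps in a textEntryInteraction; expectedLength sizes the input box. A sketch:

```xml
<!-- String response; the match operator used by the standard template
     compares strings exactly, so matching is case sensitive -->
<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="string">
  <correctResponse>
    <value>York</value>
  </correctResponse>
</responseDeclaration>
<!-- ... inside the itemBody: expectedLength="15" is a sizing hint only -->
<p>Made glorious summer by this sun of
   <textEntryInteraction responseIdentifier="RESPONSE" expectedLength="15"/>.</p>
<responseProcessing
  template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/match_correct"/>
```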
27. Writing a Postcard
• the answer may span multiple lines
• no response processing (i.e., grading)
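Free-form, multi-line answers use an extendedTextInteraction; since there is no grading, the item simply omits responseProcessing. A sketch (the expectedLines value is an assumption):

```xml
<!-- No correctResponse: the answer is collected but not machine-scored -->
<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="string"/>
<!-- ... inside the itemBody -->
<extendedTextInteraction responseIdentifier="RESPONSE" expectedLines="5">
  <prompt>Write a postcard to a friend describing your holiday.</prompt>
</extendedTextInteraction>
<!-- No responseProcessing element: a human scorer reviews the response -->
```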
29. Olympic Games
• similar to simple choice
• choices have to be presented in the context of the surrounding text
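In-context choices are hottext elements inside a hottextInteraction. A sketch with illustrative text and identifiers:

```xml
<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="identifier">
  <correctResponse>
    <value>A</value>
  </correctResponse>
</responseDeclaration>
<!-- ... inside the itemBody: the selectable words stay embedded in the sentence -->
<hottextInteraction responseIdentifier="RESPONSE" maxChoices="1">
  <prompt>Select the city that hosted the first modern Olympic Games.</prompt>
  <p>The games were revived in 1896 in
     <hottext identifier="A">Athens</hottext>, not
     <hottext identifier="P">Paris</hottext> or
     <hottext identifier="L">London</hottext>.</p>
</hottextInteraction>
```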
33. Where is Edinburgh?
• mark a coordinate on the map
• area mapping is used to check the answer; that is, a circle centered at (102,113) with a radius of 16
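A point response scored against a region uses areaMapping with a circle entry; the coordinates below come from the slide, while the image reference and dimensions are illustrative:

```xml
<!-- point response: any click inside the circle scores 1 -->
<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="point">
  <areaMapping defaultValue="0">
    <areaMapEntry shape="circle" coords="102,113,16" mappedValue="1"/>
  </areaMapping>
</responseDeclaration>
<!-- ... inside the itemBody -->
<selectPointInteraction responseIdentifier="RESPONSE" maxChoices="1">
  <prompt>Mark Edinburgh on this map of the United Kingdom.</prompt>
  <object type="image/png" data="images/uk.png" width="206" height="280">Map of the UK</object>
</selectPointInteraction>
<!-- Standard template that scores a point via the areaMapping -->
<responseProcessing
  template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/map_response_point"/>
```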
35. Flying Home
• the correct answer is composed of an ordered list of values
• presented as hotspots
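Ordering hotspots on an image is a graphicOrderInteraction: the response declaration is the same ordered list as before, but the choices are regions of a picture. A sketch with illustrative coordinates:

```xml
<responseDeclaration identifier="RESPONSE" cardinality="ordered" baseType="identifier">
  <correctResponse>
    <value>A</value>
    <value>B</value>
    <value>C</value>
  </correctResponse>
</responseDeclaration>
<!-- ... inside the itemBody: each hotspotChoice is a clickable region -->
<graphicOrderInteraction responseIdentifier="RESPONSE">
  <prompt>Put the airports into the order you fly over them on the way home.</prompt>
  <object type="image/png" data="images/route.png" width="206" height="280">Route map</object>
  <hotspotChoice shape="circle" coords="77,115,8" identifier="A"/>
  <hotspotChoice shape="circle" coords="118,184,8" identifier="B"/>
  <hotspotChoice shape="circle" coords="150,235,8" identifier="C"/>
</graphicOrderInteraction>
```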
37. Low-cost Flying
• pairing up the choices
• max number of pairs = 3
• each choice can participate in at most 3 matches
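Pairing within a single pool of choices is an associateInteraction (unordered pair baseType, unlike the directed pairs used for matching into a second set). A sketch with illustrative identifiers:

```xml
<!-- Unordered pairs drawn from one pool of choices -->
<responseDeclaration identifier="RESPONSE" cardinality="multiple" baseType="pair">
  <correctResponse>
    <value>A B</value>
    <value>C D</value>
  </correctResponse>
</responseDeclaration>
<!-- ... inside the itemBody: at most 3 pairs, each choice in at most 3 of them -->
<associateInteraction responseIdentifier="RESPONSE" shuffle="true" maxAssociations="3">
  <prompt>Pair up the airports served by the same low-cost carrier.</prompt>
  <simpleAssociableChoice identifier="A" matchMax="3">Airport A</simpleAssociableChoice>
  <simpleAssociableChoice identifier="B" matchMax="3">Airport B</simpleAssociableChoice>
  <simpleAssociableChoice identifier="C" matchMax="3">Airport C</simpleAssociableChoice>
  <simpleAssociableChoice identifier="D" matchMax="3">Airport D</simpleAssociableChoice>
</associateInteraction>
```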
41. Airport Locations
• select a coordinate on the map by positioning a given object
• area mapping is used to check the answer
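Positioning an object rather than clicking a point uses a positionObjectInteraction inside a positionObjectStage; the scoring side is the same point-plus-areaMapping pattern as before. A sketch (image references are illustrative):

```xml
<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="point">
  <areaMapping defaultValue="0">
    <areaMapEntry shape="circle" coords="102,113,16" mappedValue="1"/>
  </areaMapping>
</responseDeclaration>
<!-- ... inside the itemBody: the stage holds the background image,
     and the inner object is the marker the candidate drags into place -->
<positionObjectStage>
  <object type="image/png" data="images/uk.png" width="206" height="280">Map of the UK</object>
  <positionObjectInteraction responseIdentifier="RESPONSE" maxChoices="1">
    <object type="image/png" data="images/airplane.png" width="16" height="16">Airplane marker</object>
  </positionObjectInteraction>
</positionObjectStage>
```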
43. Jedi Knights
• the candidate selects a percentage on a slider
• partial credit is given for close percentages
• the slider is configured with a lower bound, an upper bound, and a step
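A sliderInteraction carries the bound and step configuration, and partial credit for near misses can be expressed with a mapping. All values below are illustrative assumptions:

```xml
<!-- Integer response with graded credit for neighbouring values -->
<responseDeclaration identifier="RESPONSE" cardinality="single" baseType="integer">
  <correctResponse>
    <value>50</value>
  </correctResponse>
  <mapping defaultValue="0">
    <mapEntry mapKey="50" mappedValue="1"/>
    <mapEntry mapKey="40" mappedValue="0.5"/> <!-- close answers earn half credit -->
    <mapEntry mapKey="60" mappedValue="0.5"/>
  </mapping>
</responseDeclaration>
<!-- ... inside the itemBody -->
<sliderInteraction responseIdentifier="RESPONSE" lowerBound="0" upperBound="100" step="10">
  <prompt>What percentage of the population are Jedi Knights?</prompt>
</sliderInteraction>
<responseProcessing
  template="http://www.imsglobal.org/question/qti_v2p1/rptemplates/map_response"/>
```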
45. Composite Items
• Composite items are items that contain more than one point of interaction
• Composite items may contain multiple instances of the same type of interaction or have a mixture of interaction types
46. Response Processing
• Standard response processing templates were used in previous examples
• A more general response processing model is needed
– Example: provide partial credit for an ordering even when the response is not exactly the same as the correct answer
• Response processing consists of a sequence of rules that are carried out, in order, by the response processor
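As a concrete instance of such a rule sequence, the match_correct template used in the earlier examples expands to a single responseCondition:

```xml
<!-- Explicit form of the "score 1 or 0" standard template -->
<responseProcessing>
  <responseCondition>
    <responseIf>
      <!-- Compare the candidate's response with the declared correct response -->
      <match>
        <variable identifier="RESPONSE"/>
        <correct identifier="RESPONSE"/>
      </match>
      <setOutcomeValue identifier="SCORE">
        <baseValue baseType="float">1</baseValue>
      </setOutcomeValue>
    </responseIf>
    <responseElse>
      <setOutcomeValue identifier="SCORE">
        <baseValue baseType="float">0</baseValue>
      </setOutcomeValue>
    </responseElse>
  </responseCondition>
</responseProcessing>
```

More general behaviour, such as partial credit, is obtained by adding further responseCondition rules or by computing SCORE from expressions instead of fixed base values.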
47. Feedback
• Feedback consists of material presented to the candidate conditionally based on the result of response processing
– e.g., instant hints
• Modal feedback is shown to the candidate after response processing has taken place and before any subsequent attempt or review of the item
• Integrated feedback is embedded into the itemBody and is only shown during subsequent attempts or review
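The conditional display is driven by an outcome variable that response processing sets and the feedback element references. A minimal modal-feedback sketch (identifiers are illustrative):

```xml
<!-- Outcome variable that selects which feedback to show -->
<outcomeDeclaration identifier="FEEDBACK" cardinality="single" baseType="identifier"/>
<!-- ... inside responseProcessing: record an identifier for the outcome -->
<setOutcomeValue identifier="FEEDBACK">
  <baseValue baseType="identifier">correctAnswer</baseValue>
</setOutcomeValue>
<!-- ... after responseProcessing: shown only when FEEDBACK matches "correctAnswer" -->
<modalFeedback outcomeIdentifier="FEEDBACK" identifier="correctAnswer" showHide="show">
  Well done, that is the correct answer.
</modalFeedback>
```

Integrated feedback works the same way, except the conditional material is a feedbackInline or feedbackBlock element embedded in the itemBody.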
48. Mexican President
• The feedback shown depends directly on the response given by the candidate
50. Adaptive Items
• New feature of QTI version 2
• Allows an item to be scored adaptively over a sequence of attempts; the scoring is based on the path the candidate actually took
• Adaptive items must provide feedback to the candidate in order to allow them to adjust their responses
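An adaptive item is declared with the adaptive attribute, and its response processing decides when the attempt sequence ends by setting the built-in completionStatus variable. A skeletal sketch:

```xml
<assessmentItem xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
    identifier="adaptiveExample" title="Adaptive example"
    adaptive="true" timeDependent="false">
  <!-- ... declarations, itemBody, and feedback elements ... -->
  <responseProcessing>
    <responseCondition>
      <responseIf>
        <match>
          <variable identifier="RESPONSE"/>
          <correct identifier="RESPONSE"/>
        </match>
        <!-- Correct: end the sequence of attempts -->
        <setOutcomeValue identifier="completionStatus">
          <baseValue baseType="identifier">completed</baseValue>
        </setOutcomeValue>
      </responseIf>
      <responseElse>
        <!-- Incorrect: show feedback (e.g., a hint), typically reduce the
             score available on later attempts, and allow another attempt -->
      </responseElse>
    </responseCondition>
  </responseProcessing>
</assessmentItem>
```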
58. Collections of Item Outcomes
• Two sections (sectionA and sectionB)
• navigation mode is nonlinear (choose any item)
• the submission mode is set to simultaneous (responses are submitted at the end of the test)
• different weights are assigned to each item
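At the test level these settings live on the testPart, with per-item weights on the assessmentItemRef elements. A skeletal sketch (identifiers, hrefs, and weight values are illustrative):

```xml
<assessmentTest xmlns="http://www.imsglobal.org/xsd/imsqti_v2p1"
    identifier="exampleTest" title="Example test">
  <!-- nonlinear: items may be answered in any order;
       simultaneous: all responses are submitted together at the end -->
  <testPart identifier="part1" navigationMode="nonlinear" submissionMode="simultaneous">
    <assessmentSection identifier="sectionA" title="Section A" visible="true">
      <assessmentItemRef identifier="item1" href="item1.xml">
        <weight identifier="W1" value="2"/> <!-- item counts double -->
      </assessmentItemRef>
    </assessmentSection>
    <assessmentSection identifier="sectionB" title="Section B" visible="true">
      <assessmentItemRef identifier="item2" href="item2.xml">
        <weight identifier="W1" value="1"/>
      </assessmentItemRef>
    </assessmentSection>
  </testPart>
  <!-- outcomeProcessing would combine the weighted item scores here -->
</assessmentTest>
```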
59. Additional Functions
• Categories of Items
• Arbitrary Weighting of Item Outcomes
• Specifying the Number of Allowed Attempts
• Controlling Item Feedback in Relation to the
Test
• Duration of Tests
• Early Termination of Test
• Branching Based on the Response to an
Assessment Item
• Randomizing the Order of Items and Sections
60. Packaged Items, Tests and
Meta-data
• Both single items and multiple items can be packaged
• A package is described by a manifest file: imsmanifest.xml
• The manifest file demonstrates the use of a resource element to associate meta-data (both LOM and QTI) with an item, and the file element to reference the assessmentItem XML file and the associated image file
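A skeletal imsmanifest.xml along those lines (identifiers and file names are illustrative; the resource type string shown is the one conventionally used for QTI 2.1 items):

```xml
<manifest xmlns="http://www.imsglobal.org/xsd/imscp_v1p1" identifier="MANIFEST-1">
  <organizations/>
  <resources>
    <!-- One resource per item; its metadata element carries LOM and qtiMetadata -->
    <resource identifier="item1" type="imsqti_item_xmlv2p1" href="item1.xml">
      <metadata>
        <!-- LOM and qtiMetadata elements describing the item go here -->
      </metadata>
      <!-- file elements list the physical files making up the item -->
      <file href="item1.xml"/>
      <file href="images/photo.png"/>
    </resource>
  </resources>
</manifest>
```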
61. Meta-data and Usage Data
• The IEEE LOM standard defines a set of meta-data elements that can be used to describe learning resources, but does not describe assessment resources in sufficient detail
• New meta-data elements in IMS QTI v2.0 extend the IEEE LOM to meet the specific needs of QTI
• QTI version 2.1 further extends this to enable the description of tests, pools, and object banks
• Secondary meta-data, sometimes known as 'usage data' (item statistics), is defined separately in its own data model
Revision: 8 June 2006
62. New Meta-data Elements in IMS QTI v2.0
• New category of meta-data
• qtiMetadata
– itemTemplate
– timeDependent
– composite
– interactionType
– feedbackType
– solutionAvailable
– toolName
– toolVersion
– toolVendor
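A sketch of how these elements might appear inside a resource's metadata (element values are illustrative, and the namespace URI shown is an assumption based on the QTI 2.1 metadata binding):

```xml
<imsqti:qtiMetadata xmlns:imsqti="http://www.imsglobal.org/xsd/imsqti_metadata_v2p1">
  <imsqti:timeDependent>false</imsqti:timeDependent>
  <imsqti:composite>false</imsqti:composite>
  <imsqti:interactionType>choiceInteraction</imsqti:interactionType>
  <imsqti:feedbackType>nonadaptive</imsqti:feedbackType>
  <imsqti:solutionAvailable>true</imsqti:solutionAvailable>
  <imsqti:toolName>Example Authoring Tool</imsqti:toolName>
  <imsqti:toolVersion>1.0</imsqti:toolVersion>
  <imsqti:toolVendor>Example Vendor</imsqti:toolVendor>
</imsqti:qtiMetadata>
```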
64. Feedback Type
• None: no feedback is available
• Nonadaptive: feedback is available but it is non-adaptive
• Adaptive: feedback is available and is adaptive
65. IEEE LOM Profile
• A few suggestions on the usage of IEEE LOM when applied to QTI 2.0 items
66. IEEE LOM - General
• General
– Identifier
– Title
– Language
– Description
– Keyword
– Coverage
67. IEEE LOM – Lifecycle, Meta-metadata
• Lifecycle
– Version
– Status
– Contribute
• Meta-metadata
– Identifier
– Contribute
– Metadata_schema
– Language
68. IEEE LOM – Technical, Educational
• Technical
– Format
– Size
– Location
– Other Platform Requirements
• Educational
– Context
– typical_learning_time
– Description
– Language
70. Usage Data
• QTI defines a separate class for describing item statistics
• An optional URI identifies the default glossary in which the names of the itemStatistics are defined
• itemStatistic
– Name
– Glossary
– Context
– CaseCount
– stdError
– stdDeviation
– lastUpdated
– targetObject
• Identifier
• partIdentifier
– ordinaryStatistic
– categorizedStatistic
71. XML Binding
• The accompanying XML binding provides a binding for the qtiMetadata object
• The qtiMetadata class defines a new category that could appear alongside LOM categories
• qtiMetadata is bound separately and must be used in parallel to the LOM object as an additional meta-data object
72. Assessment Test, Section, and Item Information Model
• The reference guide to the main data model for assessment tests and items. The document provides detailed information about the model and specifies the requirements of delivery engines and authoring systems.
73. Results Reporting
• A reference guide to the data model for results reporting. The document provides detailed information about the model and specifies the associated requirements on delivery engines.
75. Summary
• Representation of question and test data and their corresponding results reports
• Developed by IMS
• Can be combined with SCORM
• Used within IMS Common Cartridge