Presented on 25 May 2019 at the conference Artificial Intelligence and Adaptive Education (AIAED'19), Beijing, China.
Abstract: We introduce the Multimodal Learning Analytics Pipeline, a generic approach for collecting and exploiting multimodal data to support learning activities across physical and digital spaces. The MMLA Pipeline helps researchers set up multimodal experiments, reducing the setup and configuration time required to collect meaningful datasets. Using the MMLA Pipeline, researchers can choose a set of custom sensors to track different modalities, including behavioural cues or affective states. They can thus quickly obtain multimodal sessions consisting of synchronised sensor data and video recordings, analyse and annotate the recorded sessions, and train machine learning algorithms to classify or predict the patterns investigated.
The Multimodal Learning Analytics Pipeline
1.
2. The Multimodal Learning Analytics Pipeline
Daniele Di Mitri1, Jan Schneider2,
Marcus Specht3, Hendrik Drachsler2
1 Open University of The Netherlands, The Netherlands
2 German Institute for International Educational Research, Germany
3 Delft University of Technology, The Netherlands
3. • We introduce the Multimodal Learning Analytics Pipeline (MMLA Pipeline)
• a generic approach for collecting and exploiting multimodal data to support learning activities across physical and digital spaces
• using Internet of Things devices, wearable sensors, signal processing and machine learning
Introduction
4. 1. Psychomotor learning
• coordination between body and mind
2. Multimodal learning
• modes: text, image, speech, haptic
• modalities: speaking, gesturing, moving, facial expressions, physiological signals
3. Embodied communication
• people communicate using the whole body
Relevant Theories of Learning
5. Multimodal Learning Analytics (MMLA)
Learning Analytics approach:
measurement, collection, analysis and reporting of data about learners
+
data from multiple modalities
=
a more accurate representation of the learning process!
6. • Problem: multimodal data is complex
• multi-dimensional, with heterogeneous formats and supports
• noisy and messy
• difficult to store, synchronise, annotate and exploit
• Solution: the MMLA Pipeline
• supports researchers in setting up experiments much more quickly
• standard tools over tailor-made solutions
• reduced data-manipulation overhead
• focus on analysis
Enabling technological advances
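One of the difficulties named above, synchronising streams that arrive at different rates, can be sketched with a nearest-timestamp alignment. This is a minimal illustration under assumed data shapes (lists of timestamped tuples), not the Pipeline's actual implementation:

```python
import bisect

def align_streams(primary, secondary, tolerance=0.05):
    """Pair each primary sample with the nearest-in-time secondary sample.

    Streams are lists of (timestamp_seconds, value) tuples sorted by time;
    pairs farther apart than `tolerance` seconds are dropped.
    """
    sec_times = [t for t, _ in secondary]
    aligned = []
    for t, value in primary:
        i = bisect.bisect_left(sec_times, t)
        best = None
        # Candidates: the secondary sample just before and just after t.
        for j in (i - 1, i):
            if 0 <= j < len(secondary):
                dt = abs(secondary[j][0] - t)
                if best is None or dt < best[0]:
                    best = (dt, secondary[j][1])
        if best is not None and best[0] <= tolerance:
            aligned.append((t, value, best[1]))
    return aligned

# Toy example: a 10 Hz motoric stream vs. a slower 4 Hz physiological stream.
motoric = [(i / 10, i) for i in range(10)]
physio = [(i / 4, 60 + i) for i in range(4)]
aligned = align_streams(motoric, physio, tolerance=0.06)
```

Samples with no physiological reading within 60 ms are simply dropped, which is one common design choice; interpolation between neighbouring samples would be the alternative.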
7. Graphical overview
[Diagram: the five pipeline steps: 1. data collection (physiological and motoric sensor data, task model, 3rd-party sensors or APIs); 2. data storing; 3. data annotation (expert reports, evaluation); 4. data processing (processed data store, model fitting, prediction models); 5. data exploitation. Exploitation produces (A) corrective feedback through intelligent tutors, (B) predictions, (C) patterns, and (D) historical reports on dashboards, spanning research uses (corrections, awareness) and production uses (orchestration, adaptation).]
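The five numbered steps of the pipeline can be sketched as a minimal processing chain. All names, data shapes and the stub sensor below are invented for illustration; they are not the Pipeline's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """A recorded multimodal session moving through the five pipeline steps."""
    frames: list                                     # raw sensor frames (step 1)
    annotations: dict = field(default_factory=dict)  # expert labels (step 3)
    features: list = field(default_factory=list)     # processed data (step 4)

def collect(sensors):                # 1. data collection
    return Session(frames=[s.read() for s in sensors])

def store(session, datastore):       # 2. data storing
    datastore.append(session)
    return session

def annotate(session, labels):       # 3. data annotation
    session.annotations.update(labels)
    return session

def process(session):                # 4. data processing: toy feature extraction
    session.features = [sum(f) / len(f) for f in session.frames]
    return session

def exploit(session, model):         # 5. data exploitation: predictions (B)
    return [model(x) for x in session.features]

# Illustrative usage with a stub sensor.
class StubSensor:
    def read(self):
        return [0.2, 0.4, 0.6]       # e.g. three readings in one frame

datastore = []
session = collect([StubSensor(), StubSensor()])
store(session, datastore)
annotate(session, {"0.0-1.0s": "correct posture"})
process(session)
predictions = exploit(session, model=lambda x: "ok" if x < 0.5 else "check")
```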
8. Current prototypes
1. Multimodal Learning Hub (Schneider et al., 2018): data collection, data storing
2. Visual Inspection Tool (Di Mitri et al., 2019): data annotation
9. MLT data format
example of the serialisation of Myo data in JSON
MLT = Meaningful Learning Experience
example annotation.json
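The slide shows a JSON serialisation of Myo armband data. As a hedged illustration of what such a frame might look like when serialised, the sketch below uses invented field names; it does not reproduce the actual MLT schema of the Multimodal Learning Hub:

```python
import json

# Hypothetical single Myo armband frame; field names are illustrative only.
frame = {
    "applicationName": "Myo",
    "recordingStart": 1558771200000,              # epoch milliseconds (example)
    "frames": [
        {
            "frameStamp": 40,                     # ms since recordingStart
            "emg": [12, -3, 5, 7, -1, 0, 9, -4],  # 8 EMG pods
            "acc": [0.01, -0.98, 0.12],           # accelerometer (g)
            "gyro": [1.2, 0.4, -0.7],             # gyroscope (deg/s)
        }
    ],
}

serialised = json.dumps(frame, indent=2)  # what would be written to disk
restored = json.loads(serialised)         # what an annotation tool reads back
```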
10. A. Corrective feedback: hardcoded rules, e.g. "if sensor value is x then y" (non-adaptive)
B. Classification/prediction: estimation of the learning labels (adaptive)
C. Pattern identification: mining of recurrent sensor values
D. Historical reports: visualisations and analytics dashboards
Exploitation strategies
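Strategy A can be made concrete with a toy hardcoded rule in the CPR setting of the Multimodal Tutor for CPR reference. The 50–60 mm compression-depth thresholds follow the common CPR guideline and are used purely as an example, not taken from the Pipeline:

```python
def corrective_feedback(depth_mm):
    """Strategy A: a non-adaptive, hardcoded 'if sensor value is x then y' rule.

    Thresholds follow the common 50-60 mm chest-compression guideline,
    used here only to illustrate the rule shape.
    """
    if depth_mm < 50:
        return "press deeper"
    if depth_mm > 60:
        return "press less hard"
    return "good compression depth"
```

The limitation named on the slide is visible here: the rule never adapts to the learner, which is exactly what strategies B and C add on top.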
12. • All cases are Multimodal Tutors
• intelligent tutoring systems that use multimodal data
• The MMLA Pipeline supports
• the collection, analysis, annotation and exploitation of multimodal data
• Current prototype
• optimised for individual learning
• recordings of ~10 minutes, retrospective feedback
• Future prototypes
• real-time multimodal feedback
• collaborative learning
• longer recorded sessions
Evidence of potential impact
13. • The MMLA Pipeline is a generic and useful approach for researchers
• flexible and extensible applications
• can be used with a range of sensor applications
• the different components are Open Source
• Available for demo!
Summary
14. • Schneider, J., Di Mitri, D., Limbu, B., & Drachsler, H. (2018). Multimodal Learning Hub: A Tool for Capturing Customizable Multimodal Learning Experiences, 1, 45–58. http://doi.org/10.1007/978-3-319-98572-5_4
• Di Mitri, D., Schneider, J., Specht, M., & Drachsler, H. (2018). From signals to knowledge: A conceptual model for multimodal learning analytics. Journal of Computer Assisted Learning, 34, 338–349. https://doi.org/10.1111/jcal.12288
• Di Mitri, D., Schneider, J., Specht, M., & Drachsler, H. (2019). Read Between the Lines: An Annotation Tool for Multimodal Data for Learning. In Proceedings of the 9th International Conference on Learning Analytics & Knowledge (LAK19) (pp. 51–60). New York, NY, USA: ACM. http://doi.org/10.1145/3303772.3303776
• Di Mitri, D. (2018). Multimodal Tutor for CPR. In C. Penstein Rosé et al. (Eds.), Artificial Intelligence in Education. AIED 2018. Lecture Notes in Computer Science, vol. 10948. Springer, Cham. http://doi.org/10.1007/978-3-319-93846-2_96
• Di Mitri, D., Schneider, J., Specht, M., & Drachsler, H. (2018). The Big Five: Addressing Recurrent Multimodal Learning Data Challenges. In R. Martinez-Maldonado et al. (Eds.), Proceedings of the Second Multimodal Learning Analytics Across (Physical and Digital) Spaces (CrossMMLA), Vol. 2163. CEUR Proceedings. http://ceur-ws.org/Vol-2163/#paper6
Useful references