Nowadays the web is a ubiquitously available source of information that can be accessed through a broad range of devices, such as smartphones, tablets, and notebooks. Although web applications can be used on several devices, they are designed and controlled for a one-to-one type of interaction between a single user and a single device, which prevents device-spanning multi-modal interactions.
We propose a model-based run-time framework for designing and executing multi-modal web interfaces. In contrast to model-based design by reification, a process that derives concrete models from abstract ones through transformation, we design interactors that keep all design models alive at run-time. Interactors are based on finite state machines that can be inspected and manipulated at run-time and are synchronized across different devices and modalities using mappings. We show the expressiveness of state charts for modeling interactions, interaction resources, and interaction paradigms.
We validate our approach by checking its conformance to common requirements for multi-modal frameworks, classify it according to characteristics identified in prior work, and present initial results of a performance analysis.