1) The document proposes a model-based methodology for developing aircraft human-machine interfaces (AHMIs) that allows for automated evaluation.
2) Key aspects of the methodology include models for user interface elements, a user interface description language, and a model-based design process aligned with the Cameleon reference framework.
3) The methodology aims to integrate automated evaluation into the AHMI design loop by specifying interfaces with UsiXML models and evaluating them for usability guidelines, execution time, and workload.
1. Towards Model-Based AHMI Development
Juan Manuel Gonzalez-Calleros*, Jean Vanderdonckt*, Andreas Lüdtke†, Jan-Patrick Osterloh†
* Université catholique de Louvain (UCL), Belgian Laboratory of Computer-Human Interaction (BCHI), {juan.m.gonzalez, jean.vanderdonckt}@uclouvain.be
† OFFIS e.V., Escherweg 2, 26127 Oldenburg, Germany, {luedtke, osterloh}@offis.de
2. • Existing work on automated evaluation of AHMI
• Some studies suggested that AHMIs and software user interfaces share several similarities:
• In terms of interaction techniques for input/output
Williges, R.C., Williges, B.H., and Fainter, R.G., Software interfaces for aviation systems, in Human Factors in Aviation, E.L. Wiener and D.C. Nagel (Eds.), Academic Press, San Diego, 1988, pp. 463-493.
• In terms of automatic analysis of workload
Eldredge, D., Mangold, S., and Dodd, R.S., A Review and Discussion of Flight Management System Incidents Reported to the Aviation Safety Reporting System, Battelle/U.S. Dept. of Transportation, 1992.
• In terms of automatic evaluation based on goal models
Irving, S., Polson, P., and Irving, J.E., A GOMS Analysis of the Advanced Automated Cockpit, Proc. of ACM Conf. CHI'94, ACM Press, New York, 1994, pp. 344-350.
Introduction
30 January 2015, HCI-AERO, November 3-5 2010, Page 2
3. • Several challenges exist for aircraft cockpit design, such as:
• The introduction of new technologies breaks previous user experience.
Introduction
4. • New challenges for AHMI: analysis, design, implementation, and evaluation
• Integrating evaluation in the loop of the AHMI life cycle involves the use of pilots and a physical simulator.
• Shortcomings: availability, cost, time, …
• HUMAN project: new ways to substitute pilots and the physical simulator are explored by coupling them to a virtual simulator
• Our goal: to introduce automated UI evaluation (workload, execution time, usability guidelines) in the life cycle
Introduction
5. • Interactive Cooperative Objects (ICOs)
• Used to model aircraft interactive systems (air traffic workstations,
civil aircraft cockpit and military aircraft cockpit)
• Formal description for interactive cockpit applications using Petri
nets
• Behavioural aspects of systems in the cockpit are modelled
• Recent work on usability, reliability, and scalability
• Navarre, D., Palanque, Ph., Ladry, J.-F., and Barboni, E., ICOs: A model-based user interface description technique dedicated to interactive systems addressing usability, reliability and scalability, ACM Transactions on Computer-Human Interaction (TOCHI), Vol. 16, No. 4, November 2009.
• Problems
• Only a limited set of widgets, user interface models, and guidelines is available for development.
State of the Art
6. • ARINC 661 Standard
• Defines protocols for communication between the dialogue and the functional core of a cockpit display system:
• a user application is defined as a system that has two-way
communication with the CDS (Cockpit Display System):
• Transmission of data to the CDS, possibly displayed to the flight deck crew
• Reception of input from interactive items managed by the CDS
• A set of widgets is included as a recommendation
• Problems
• No design guidelines for the widgets
• No method for designing UIs is considered in the standard
• Each manufacturer is free to implement its own understanding of the standard
• The ARINC standard is not used for primary cockpit applications (AHMI)
• It only deals with secondary applications involved in the management of the flight, such as those allocated to the Multiple Control Display Unit
State of the Art
7. • A methodology is suggested that is composed of:
• Models (traditional widgets + AHMI-specific UI elements)
• Language (User Interface Description Language)
• Method (model-based UI design applied to the AHMI)
• Software support
• A language-engineering approach
• Semantics as meta-models (UML class diagrams)
• Syntax as XML Schemas
• Stylistics (the visual syntax) is not used in our context
• A structured method compliant with the Cameleon Reference Framework for UI development
• We focus only on the layer concerning the Concrete UI model
Model-Based AHMI Design
8. [Figure: transformation chain: Task and Domain Model → (model to model) → Abstract UI Model → (model to model) → Concrete UI Model → (code generation) → Final UI. A Control in the Task and Domain Model can be realized as a physical control (physical interaction object) or as a software control (2D GUI or 3D GUI).]
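The transformation chain in this figure can be pictured as a pipeline of model-to-model steps. Below is a minimal illustrative sketch in Python; the class names and mapping rules are our own simplifications for illustration, not the actual UsiXML meta-models:

```python
from dataclasses import dataclass

# Hypothetical, heavily simplified stand-ins for the Cameleon levels.
@dataclass
class Task:            # Task & Domain Model element
    name: str
    kind: str          # e.g. "control" or "display"

@dataclass
class AbstractIO:      # Abstract UI element (modality-independent)
    task: str
    facet: str         # "input" or "output"

@dataclass
class ConcreteWidget:  # Concrete UI element (toolkit-independent)
    task: str
    widget: str

def task_to_abstract(t: Task) -> AbstractIO:
    # Model-to-model step 1: a control task requires an input facet.
    return AbstractIO(t.name, "input" if t.kind == "control" else "output")

def abstract_to_concrete(a: AbstractIO) -> ConcreteWidget:
    # Model-to-model step 2: reify the abstract facet as a 2D GUI widget.
    return ConcreteWidget(a.task, "button" if a.facet == "input" else "label")

chain = [abstract_to_concrete(task_to_abstract(t))
         for t in [Task("engage trajectory", "control"),
                   Task("show trajectory", "display")]]
for w in chain:
    print(w.task, "->", w.widget)
```

In the real methodology these steps are specified as transformations over UsiXML models rather than hard-coded functions; the sketch only shows the shape of the chain.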
9.-10. [Figures: the same transformation chain repeated on slides 9 and 10.]
11. • The Concrete UI Model (CUI) allows:
• Specification of the AHMI presentation and behaviour independently of any programming toolkit
Model-Based AHMI Design
[Figure: the transformation chain, with the Final UI rendered in X3D or OpenGL.]
12. • Integrating evaluation in the loop of the AHMI design implies the use of pilots and a simulator.
• Gonzalez Calleros, J.M., Vanderdonckt, J., Lüdtke, A., and Osterloh, J.P., Towards Model-Based AHMI Automatic Evaluation, Proc. of the 1st Workshop on Human Modelling in Assisted Transportation (HMAT'2010), Belgirate, Italy, June 30-July 2, 2010, Springer-Verlag, Berlin.
• Automated evaluation of:
• Static aspects (UI layout, position of objects)
• Dynamic aspects (state of a button during the interaction, colour of a label)
• Any UI is represented as a UsiXML model submitted to evaluation (automatic or manual)
• Usability guidelines over the UI objects (e.g., the distribution of the widgets composing the UI) are evaluated.
Benefits from Relying on a Model-Based
Development
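As an illustration of a static-aspect check on a UI model, the following sketch validates a layout guideline (every widget must fit inside its window) over a simplified, hypothetical CUI fragment. The element and attribute names (`window`, `button`, `x`, `y`, ...) are our own stand-ins for illustration, not the real UsiXML schema:

```python
import xml.etree.ElementTree as ET

# Illustrative, simplified CUI fragment (not the real UsiXML schema).
SPEC = """
<window id="ahmi" width="400" height="300">
  <button id="dirto"  x="10"  y="250" width="80" height="30"/>
  <button id="engage" x="350" y="250" width="80" height="30"/>
</window>
"""

def check_inside_window(spec: str):
    """Static guideline: every widget must lie within its window bounds."""
    win = ET.fromstring(spec)
    w, h = int(win.get("width")), int(win.get("height"))
    violations = []
    for widget in win:
        x, y = int(widget.get("x")), int(widget.get("y"))
        ww, wh = int(widget.get("width")), int(widget.get("height"))
        if x + ww > w or y + wh > h:
            violations.append(widget.get("id"))
    return violations

print(check_inside_window(SPEC))  # the 'engage' button overflows the window
```

A dynamic-aspect check would work the same way, but over the widget states recorded during the interaction rather than over the static specification.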
13. • Special attention was paid to guidelines relevant to standard certification and quality assurance, and to expressing them in the Guideline Definition Language (GDL)
Evaluating the AHMI User
Interface
http://www.usixml.org
http://www.w3.org/2005/Incubator/model-based-ui/
14. Integrating UI Evaluation in a
Simulation Environment
19. Evaluation: Workload
20. Evaluation: Execution Time and Workload
• Operators used:
  Use mouse to point at object on screen = 1.5 s; execute a mental "step" = 0.075 s; retrieve a simple item from long-term memory = 1.2 s
  Evaluation/Judgment (consider single aspect) = 4.6; Discrete Actuation (button, toggle, trigger) = 2.2
• Workload (Remember date, Select day, Select month, Select year):
  4.6 + 4 × (4.6 + 2.2) = 31.8
• Execution time (Select day, Select month, Select year, Validate):
  1.2 + 3 × (1.5 + 0.075) + 1.5 = 7.425 s
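The two totals on this slide can be reproduced directly from the operator tables. A minimal sketch, with operator names abbreviated by us and the grouping of operators taken from the totals on the slide:

```python
# Execution-time operators (seconds) and workload weights from the slide.
EXEC = {"retrieve_ltm": 1.2, "point_mouse": 1.5, "mental_step": 0.075}
LOAD = {"evaluation_judgment": 4.6, "discrete_actuation": 2.2}

# Remember date, then select day/month/year, then validate.
execution_time = (EXEC["retrieve_ltm"]
                  + 3 * (EXEC["point_mouse"] + EXEC["mental_step"])
                  + EXEC["point_mouse"])

# One evaluation/judgment, plus four actions that each combine an
# evaluation/judgment with a discrete actuation.
workload = (LOAD["evaluation_judgment"]
            + 4 * (LOAD["evaluation_judgment"] + LOAD["discrete_actuation"]))

print(round(execution_time, 3))  # 7.425
print(round(workload, 1))        # 31.8
```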
21. Some guidelines
• Cockpit display systems should at least be consistent with systems of our daily life [Singer 2002]
• Usability guidelines from ISO 9126 could be evaluated
• Messages should always follow the nomenclature: first letter capitalized and the rest in lower case
• Guidelines specific to AHMI display systems, such as consistency of the roll index in the compass rose [Singer 2001]
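The message-nomenclature guideline above is directly machine-checkable. A minimal sketch, assuming messages are available as plain strings (the sample messages are our own examples):

```python
def follows_nomenclature(message: str) -> bool:
    """Guideline: first letter capitalized, the rest in lower case."""
    return (bool(message)
            and message[0].isupper()
            and message[1:] == message[1:].lower())

# Illustrative sample messages.
for m in ["Send to atc", "ENGAGE!", "select waypoint"]:
    print(f"{m!r}: {'ok' if follows_nomenclature(m) else 'violation'}")
```

In the methodology such rules would be stated once in GDL and evaluated over the message strings found in the UsiXML model, rather than hand-coded per rule.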
22. • A model-based method for the development of the AHMI was presented, allowing:
• A structured development process (making design more explicit)
• UI evaluation
• Traditional measurements can be assessed, like UI workload and execution time
• More complex automated evaluation based on guidelines
• Exploration of design options; for instance, the interaction modality of the UI can be the object of evaluation
• The original 2D rendering can be equally rendered in 3D
• A future plan is to automatically generate the AHMI from its model and to submit it to run-time analysis
Conclusion
More than fifty direct actions can be manipulated on the AHMI. Since there is no significant difference in the corresponding UI objects and layout, we restrict ourselves to one task; the rest of the UI can be generated by analogy. The task we focus on is the generation of a new trajectory in the air (Figure 1). To generate a trajectory in the air, the user has to select a waypoint on the constraint list to which the aircraft shall fly directly and at which it shall intercept the constraint list. The AHMI automatically suggests a suitable waypoint, which is written in a field above the DIRTO button whenever the mouse pointer is moved over that button. By pressing on the field above the DIRTO button, the user accepts the suggestion (trigger suitable waypoint). After clicking on the waypoint or on the field with the suggested waypoint's name, a trajectory leading from the current position to the intercept point, and from there on along the constraint list, is generated (system tasks of the subtree create arbitrary trajectory). While the constraint list is shown as a blue line, the trajectory is now shown as a green dotted line.
To select another waypoint, the user simply has to click first on the DIRTO button (create waypoint) and then move the mouse onto the waypoint on the constraint list he wishes to select. The waypoint's name is then marked in yellow and written on the DIRTO button (select arbitrary waypoint). Special attention must be paid to the calculate trajectory feedback, as a waypoint can be selected more than once: if one waypoint was selected, a trajectory is proposed, but if another waypoint is then selected, the previous trajectory is deleted and the newly proposed trajectory is drawn. After the trajectory has been generated, it can be negotiated with ATC simply by moving the mouse over the SEND TO ATC menu. A priority can be chosen during the negotiation process with ATC (select negotiation type). After the negotiation type has been selected, the system shows the feedback from ATC about the trajectory.
Thereafter, even if the negotiation has failed, a click on ENGAGE! (trigger trajectory engage) activates the AFMS guidance, which generates aircraft control commands to guide the aircraft along the generated trajectory. The trajectory is then displayed as a solid green line (show trajectory). If the trajectory is approved by ATC and engaged, i.e. the AFMS guides the aircraft along that trajectory, the dark grey background of the trajectory changes to a bright grey one. One relevant aspect of relying on task models is that it revealed a usability problem in the existing system: the current version of the AHMI allows pilots to trigger any of the three actions (select, negotiate, and engage trajectory) without forcing a logical sequence of the tasks, so interaction objects are enabled even when they should not be. The task model structure and its relationships ensure, at some point, that the logical sequence of actions is considered as a constraint for the further concretization of the tasks.
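The ordering constraint that the task model imposes (select, then negotiate, then engage) can be pictured as a simple enabling policy over the interaction objects. The class and action names below are our own illustration, not part of the actual AHMI implementation:

```python
# Logical order derived from the task model: select -> negotiate -> engage.
SEQUENCE = ["select trajectory", "negotiate trajectory", "engage trajectory"]

class TrajectoryDialogue:
    """Enables each action only after its predecessors are completed."""
    def __init__(self):
        self.done = []

    def enabled(self, action: str) -> bool:
        # An action is enabled only when all earlier tasks are completed.
        i = SEQUENCE.index(action)
        return self.done == SEQUENCE[:i]

    def trigger(self, action: str):
        if not self.enabled(action):
            raise RuntimeError(f"{action!r} is not enabled yet")
        self.done.append(action)

d = TrajectoryDialogue()
print(d.enabled("engage trajectory"))  # False: engage is disabled at first
d.trigger("select trajectory")
d.trigger("negotiate trajectory")
print(d.enabled("engage trajectory"))  # True once the sequence is respected
```

The current AHMI, in contrast, behaves as if `enabled` always returned `True`, which is precisely the usability problem the task model exposed.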
The Symbolic AHMI (SAHMI) architecture in the context of a virtual simulation platform is shown. A repository containing the UsiXML specification of the AHMI UI is used. This file is read by a parser that validates the specification and transforms it into a machine-readable structure called the model merger. The UI is complemented with dynamic and static data accessed via the simulation system. The Cognitive Architecture (CA) is used to simulate pilots' interaction with the AHMI. Details on the CA and the experiments are beyond the scope of this paper; they can be found in [9]. Simulated pilot actions over the UI are passed as messages that are processed in the model merger. These data from the simulation system must be transformed to be compatible with the UsiXML format. The data are stored as a log file history.
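The processing of simulated pilot actions by the model merger can be sketched as follows; the record fields and widget names are our own illustration, not the actual SAHMI message format:

```python
from datetime import datetime, timezone

class ModelMerger:
    """Applies simulated pilot actions to widget state and logs them."""
    def __init__(self, widgets):
        self.state = {w: "idle" for w in widgets}
        self.log = []                     # log file history of the session

    def process(self, widget: str, event: str):
        if widget not in self.state:
            raise KeyError(f"unknown widget {widget!r}")
        self.state[widget] = event
        # Store the action as a timestamped record that can later be
        # serialized in a UsiXML-compatible form (illustrative shape).
        self.log.append({"time": datetime.now(timezone.utc).isoformat(),
                         "widget": widget, "event": event})

merger = ModelMerger(["DIRTO", "SEND_TO_ATC", "ENGAGE"])
merger.process("DIRTO", "pressed")
merger.process("ENGAGE", "pressed")
print(len(merger.log), merger.state["DIRTO"])
```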
The transformer module modifies the specification of the UI in order to test multiple configurations. For instance, in Figure C a combo box is used instead of a menu (Figure B) for selecting the negotiation type with the ATC. As a result, the UI timeline can be composed of different versions of the UI for performing the same task. The first timeline (Figure B) corresponds to the real simulated system as it is. The second and subsequent timelines would be the result of investigating different renderings of the same UI over time. For instance, timeline B in the figure shows changes in the location of widgets (T1, T2, and Tn) and the replacement of a widget (T3). The evaluation layer of the SAHMI keeps a trace of the evolution of the UI during the interaction. The model merger layer reconstructs the UsiXML and sends it to be stored in the online evaluation tool.
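The widget substitution performed by the transformer module (e.g., menu replaced by combo box for the negotiation type) can be sketched as a rewrite over a concrete UI specification. The XML fragment below uses simplified, hypothetical element names, not the real UsiXML schema:

```python
import xml.etree.ElementTree as ET

# Illustrative concrete UI fragment for the ATC negotiation-type selector.
CUI = """
<cui>
  <menu id="negotiationType">
    <item>low priority</item>
    <item>high priority</item>
  </menu>
</cui>
"""

def substitute(spec: str, widget_id: str, new_tag: str) -> str:
    """Produce a UI variant where one widget type replaces another."""
    root = ET.fromstring(spec)
    for elem in root.iter():
        if elem.get("id") == widget_id:
            elem.tag = new_tag            # e.g. menu -> comboBox, items kept
    return ET.tostring(root, encoding="unicode")

variant = substitute(CUI, "negotiationType", "comboBox")
print("comboBox" in variant)
```

Each such variant would form one entry in a UI timeline, all performing the same task but rendered differently.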