Reliable Visual Analytics, a Prerequisite for Outcome Assessment of Engineering Systems

Keywords: reliable visual analytics, evaluation, verification and validation assessment, quality criteria, metrics


Various evaluation approaches exist for multi-purpose visual analytics (VA) frameworks. They are based on empirical studies in information visualization or on community activities, for example, the VA Science and Technology Challenge (2006-2014), created as a community evaluation resource to 'decide upon the right metrics to use, and the appropriate implementation of those metrics including datasets and evaluators'. In this paper, we propose using evaluated VA environments for computer-based processes or systems, with the main goal of aligning user plans, system models, and software results. For this purpose, trust in the VA outcome must be established, which can be achieved by following the (meta-)design principles of a human-centered verification and validation assessment and by taking users' task models and interaction styles into account, since working with the visualization interactively is an integral part of VA. To define reliable VA, we identify various dimensions of reliability along with their quality criteria, requirements, attributes, and metrics. Several software packages are used to illustrate the concepts.

Author Biographies

Wolfram Luther, University of Duisburg-Essen

Senior Professor, Department of Computer Science and Applied Cognitive Science

Benjamin Weyers, University of Trier

Junior Professor, Department IV

How to Cite
Luther, W., Auer, E., & Weyers, B. (2020). Reliable Visual Analytics, a Prerequisite for Outcome Assessment of Engineering Systems. Acta Cybernetica, 24(3), 287-314.

Section: Uncertainty Modeling, Software, Verified Computing and Optimization