Generating Visual Arguments: a Media-independent Approach

Nancy Green, Stefan Kerpedjiev, and Steven F. Roth
Workshop Paper, AAAI '98 Workshop on Representations for Multi-modal Human-Computer Interaction, July, 1998

Abstract

The research reported here is part of our ongoing effort (Kerpedjiev et al. 1997b; 1997a; Green, Carenini, & Moore 1998; Green et al. 1998; Kerpedjiev et al. 1998) to design systems that can automatically generate integrated text and information graphics presentations of complex, quantitative data. In this paper, we take the position that certain types of arguments that can be presented visually in information graphics (e.g., bar charts and scatter plots) can be generated from an underlying media-independent representation of a presentation. In support of this claim, first we briefly describe the architecture we are developing for the generation of integrated text and information graphics presentations. In this architecture, media-independent communicative acts are transformed into user task specifications, which are the basis for the automatic design of the presentation's graphics. Then we present an example showing correspondences between the media-independent representation of an argument and the tasks that would be used to design a graphic expressing the argument.
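To make the pipeline described above concrete, the following is a minimal, hypothetical Python sketch of the step in which media-independent communicative acts are mapped to user task specifications for a graphic designer. The class names, act types, and task vocabulary (CommunicativeAct, UserTask, "compare", "lookup", etc.) are illustrative assumptions, not the representation published in the paper.

from dataclasses import dataclass
from typing import List

@dataclass
class CommunicativeAct:
    """A media-independent act, e.g. asserting or comparing data values."""
    act_type: str               # hypothetical vocabulary: "assert", "compare", ...
    proposition: str            # the claim the act communicates
    data_attributes: List[str]  # data attributes involved in the claim

@dataclass
class UserTask:
    """A task the reader must be able to perform with the graphic;
    such task specifications drive the automatic graphic design step."""
    operation: str              # hypothetical vocabulary: "lookup", "compare-values", ...
    attributes: List[str]

def acts_to_tasks(acts: List[CommunicativeAct]) -> List[UserTask]:
    """Sketch of the transformation: each communicative act yields the
    reader tasks a graphic must support for the act to succeed."""
    tasks: List[UserTask] = []
    for act in acts:
        if act.act_type == "compare":
            tasks.append(UserTask("compare-values", act.data_attributes))
        elif act.act_type == "assert":
            tasks.append(UserTask("lookup", act.data_attributes))
        # further act types would map to further task types
    return tasks

if __name__ == "__main__":
    acts = [CommunicativeAct("compare",
                             "project A costs more than project B",
                             ["project", "cost"])]
    for task in acts_to_tasks(acts):
        print(task)

In this sketch, a "compare" act over the attributes project and cost yields a compare-values task, which a graphic designer could satisfy with, for example, a bar chart of cost by project.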

BibTeX

@inproceedings{Green-1998-14713,
author = {Nancy Green and Stefan Kerpedjiev and Steven F. Roth},
title = {Generating Visual Arguments: a Media-independent Approach},
booktitle = {Proceedings of AAAI '98 Workshop on Representations for Multi-modal Human-Computer Interaction},
year = {1998},
month = {July},
}