Improving Common Ground in Human-Machine Teaming: Dimensions, Gaps, and Priorities

Authors: Robert Wray, James Kirk, Jeremiah Folsom-Kovarik

Abstract: “Common ground” is the knowledge, facts, beliefs, etc. that are shared between participants in some joint activity. Much of human conversation concerns “grounding,” or ensuring that some assertion is actually shared between participants. Even in highly trained tasks, such as teammates executing a military mission, each participant devotes attention to contributing new assertions, making adjustments based on the statements of others, offering repairs to resolve potential discrepancies in the common ground, and so forth.

In conversational interactions between humans and machines (or “agents”), this activity of building and maintaining a common ground is typically one-sided and fixed. It is one-sided because the human must do almost all the work of creating substantive common ground in the interaction. It is fixed because the agent does not adapt its understanding to what the human knows, prefers, and expects; instead, the human must adapt to the agent. These limitations create burdensome cognitive demand, produce frustration and distrust in automation, and make the notion of an agent “teammate” seem an ambition far beyond today’s state of the art. We seek to enable agents to partner more fully in building and maintaining common ground, and to adapt their understanding of a joint activity. While “common ground” is often called out as a gap in human-machine teaming, there is no extant, detailed analysis of the components of common ground and a mapping of these components to specific classes of functions (what specific agent capabilities are required to achieve common ground?) and deficits (what kinds of errors may arise when the functions are insufficient for a particular component of the common ground?). In this paper, we provide such an analysis, focusing on the requirements for human-machine teaming in a military context, where interactions are task-oriented and generally well-trained.

Drawing on the literature of human communication, we identify the components of information included in common ground. We identify three main axes: the temporal dimension of common ground, personal common ground, and communal common ground. The analysis further subdivides these distinctions, differentiating between aspects of the common ground such as personal history between participants, norms and the expectations based on those norms, and the extent to which actions taken by participants in a human-machine interaction context are “public” events or not. Within each dimension, we also provide examples of specific issues that may arise from a lack of common ground along that dimension. The analysis thus defines, at a more granular level than existing analyses, how specific categories of deficits in shared knowledge or differences in processing manifest as misalignment in shared understanding. The paper both identifies specific challenges and prioritizes them according to acuteness of need; not all of the gaps require immediate attention to improve human-machine interaction, and the solution to specific issues may sometimes depend on solutions to other issues. As a consequence, this analysis facilitates greater understanding of how to attack issues of misalignment in both the nearer and longer terms.

Keywords: human-machine interaction, common ground, mental models

DOI: 10.54941/ahfe1001463
