Can Machine Learning be a Good Teammate?
Authors: Leslie Blaha, Megan Morris
Abstract: We hypothesize that successful human-machine learning teaming requires machine learning to be a good teammate. However, little is understood about which design factors matter for creating technology that people perceive to be a good teammate. In a recent survey study, over 1,100 users of commercially available smart technology rated the characteristics of good teammates. Results indicate that across several categories of technology, a good teammate must (1) be reliable, competent, and communicative, (2) build human-like relationships with the user, (3) perform its own tasks, pick up the slack, and help when someone is overloaded, (4) learn to aid and support a user’s cognitive abilities, (5) offer polite explanations and be transparent in its behaviors, (6) have common, helpful goals, and (7) act in a predictable manner. Interestingly, but not surprisingly, the degree of importance given to these characteristics varies with several individual differences among participants, including agreeableness, propensity to trust technology, and tendency to be an early technology adopter. In this paper, we explore the implications of these good teammate characteristics and individual differences for the design of machine learning algorithms and their user interfaces. Machine learners, particularly if coupled with interactive learning or adaptive interface design, may be able to tailor themselves or their interactions to align with the characteristics individual users perceive to be important. This has the potential to promote greater reliance and common ground. While promising, it also risks overreliance or a mismatch between a system’s actual capabilities and the capabilities the user perceives it to have. We begin to lay out the design space considerations for building good machine learning teammates.
Keywords: Human-Machine Teaming, Teammate Likeness, Good Teammates, Machine Learning, Explainable Artificial Intelligence, Human-Autonomy Teaming