To Shoot or Not to Shoot? Human, Robot, & Automated Voice Directive Compliance with Target Acquisition & Engagement
Open Access
Article
Conference Proceedings
Authors: Giovanna Camacho, Matthew Bolton, Joseph Loggi, Kallia Smith, Emmett Rice, Tariq Iqbal
Abstract: The Army’s Optionally Manned Fighting Vehicle (OMFV) program seeks to “...operate with no more than two crewmen” (Congressional Research Service, 2021), but current vehicles use four crew members: driver, gunner, commander, and ammo handler. This study investigated how automated teammates affect warfighters within the tank. To achieve our research objective, we performed a human-subjects study at the University of Virginia (IRB ID 5734). The experiment used a mixed design: all participants were tasked to take directives from three entities, but half of the participants heard a female voice from all entities while the other half heard a male voice. Participants took commands from a human, a NAO robot, and a computer-automated voice while deciding whether to fire upon armed robots, a swarm of drones, or a single drone, engaging targets with a computer mouse. Participants were instructed that the commands given to them might not be correct and that it was up to their own judgment whether a target should indeed be engaged. The experiment took approximately 30 minutes in total: 54 iterations in which participants had 20 seconds to respond with a click accounted for 18 minutes, and in the remaining 12 minutes participants were briefed and debriefed, completed a demographic survey, the NASA TLX, and the SART, and gave subjective feedback. Data were analyzed using mixed linear model ANOVAs. Overall, Army participants preferred instruction from a human. Less experienced users ignored all directives given and proceeded to engage as they saw fit. Individuals given directives by the computer had lower accuracy and situation awareness (SA) scores. Individuals directed by the computer also had lower workload scores than when directed by the human, but higher workload scores than when directed by the robot. Participants directed by the human had higher workload and situation awareness scores.
Individuals directed by the robot showed higher accuracy scores in target acquisition, but not in target engagement. Participants receiving directives from the robot had the lowest workload scores on average and moderate SA scores. Participants never looked at the robot once the experiment began, as they were task saturated, with their vision fixated on their targets while listening for commands. Participants reported the least workload with the robot but moderate frustration with it, and the highest frustration with the computer-automated directives. A significant difference was found between the computer and robot directives in SA (F(2, 26) = 3.48, p = .046, ηp² = .211). There was also a significant difference between the target-engagement accuracy scores of beginner and experienced participants (F(2, 22) = 3.83, p = .037, ηp² = .258). Participants’ responses did not differ between male and female directive voices, nor did the voice the robot used to initiate mission directives make a difference. Ultimately, the data produced in this study will help determine how best to facilitate operator performance with or without automated teammates.
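As a rough illustration of the kind of repeated-measures F-test underlying the mixed linear model ANOVAs reported above, the sketch below computes a one-way within-subjects F statistic for a single factor (directive entity: human, robot, computer) on synthetic SA scores. The data, score scale, and group size are invented for illustration; the design is simplified to the within-subjects factor only (the reported F(2, 26) is consistent with 14 participants per cell, since (3 − 1) × (14 − 1) = 26), and this is not the authors’ actual analysis code.

```python
# Minimal repeated-measures ANOVA sketch on synthetic data (not the
# study's dataset): n participants each scored under k = 3 entities.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 14, 3  # 14 participants, 3 directive entities (illustrative)
data = rng.normal(loc=5.0, scale=1.0, size=(n, k))  # synthetic SA scores

grand = data.mean()
ss_total = ((data - grand) ** 2).sum()
ss_subjects = k * ((data.mean(axis=1) - grand) ** 2).sum()  # between-subject variability
ss_treat = n * ((data.mean(axis=0) - grand) ** 2).sum()     # entity effect
ss_error = ss_total - ss_subjects - ss_treat                # residual

df_treat, df_error = k - 1, (k - 1) * (n - 1)               # 2 and 26 here
F = (ss_treat / df_treat) / (ss_error / df_error)
p = stats.f.sf(F, df_treat, df_error)
print(f"F({df_treat}, {df_error}) = {F:.2f}, p = {p:.3f}")
```

The full study design would additionally need a between-subjects voice factor (male vs. female), which is what motivates a mixed model rather than this purely within-subjects computation.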
Keywords: human robot interaction (HRI), human agent teaming (HAT), situation awareness
DOI: 10.54941/ahfe1003948