The importance of assessing both expert and non-expert populations to inform expert performance
Open Access · Conference Proceedings Article
Authors: Stephen Mitroff, Emma Siritzky, Samoni Nag, Patrick Cox, Chloe Callahan-Flintoft, Andrew Tweedell, Dwight Kravitz, Kelvin Oie
Abstract: Realizing the benefits of research for human factors applications requires that academic theory and applied research in operational environments work in tandem, each informing the other. Mechanistic theories about cognitive processing gain insight from incorporating information from practical applications. Likewise, human factors implementations require an understanding of the underlying nature of the human operators who will be using those very implementations. This interplay holds great promise but is too often thwarted by information from one side not flowing to the other. On one hand, basic researchers are often reluctant to accept research findings drawn from complex environments and a relatively small number of highly specialized participants. On the other hand, industry decision makers are often reluctant to believe results from simplified testing environments using non-expert research participants. The argument put forward here is that both types of data are fundamentally important, and explicit efforts should bring them together into unified and integrated research programs. Moreover, effectively understanding expert performance requires assessing non-expert populations.

For many fields, it is critically important to understand how operators (e.g., radiologists, aviation security officers, military personnel) perform in their professional settings. Extensive research has explored a breadth of factors that can improve, or hinder, operators' success; however, the vast majority of these research endeavors hit the same roadblock: it is practically difficult to test specialized operators. They can be hard to gain access to, have limited availability, and sometimes there simply are not enough of them to conduct the needed research. Non-expert populations can therefore provide a much-needed resource. Specifically, it can be highly useful to create a closed-loop ecosystem wherein an idea rooted in an applied realm (e.g., radiologists are more likely to miss an abnormality if they have just found another abnormality) is explored with non-experts (e.g., undergraduate students) to affordably and extensively examine a range of theoretical and mechanistic possibilities. The most promising candidate outcomes can then be brought back to the expert population for further testing. With such a process, researchers can explore possible ideas with the more accessible population and reserve the specialized population for vetted research paradigms and questions.

While such closed-loop research practices offer a way to make the best use of available resources, the argument here is also that it is necessary to assess non-experts to fully understand expert performance. That is, even if researchers have full access to a large number of experts, they still need to test non-experts. Specifically, assessing non-experts allows for quantifying fundamentally important factors, such as strategic vs. perceptual drivers of performance and the time course of learning. Many of the potential gains in the applied sphere come from selecting the best people to train into becoming experts; without non-expert performance data it is impossible to know how to enact that selection or to divorce the effects of extensive practice and expertise from those of the operational environment. While there has been an, at times, adversarial relationship between research practices that use non-expert vs. expert participants, the proposal here is that embracing both is vital for fully understanding the nature of expert performance.
Keywords: expert performance, participant recruitment, research practices
DOI: 10.54941/ahfe1001486