Comparison of Lab- and Remote-Based Human Factors Validation – A Pilot Study

Open Access Article | Conference Proceedings
Authors: Karoline Johnsen, Bernhard Wandtner, Michael Thorwarth

Abstract: The possibility of conducting human factors validations remotely is becoming increasingly important, not least due to the COVID-19 pandemic. However, there is a lack of research addressing the reliability of remotely obtained data in the field of medical products. Observability appears to be a key factor and must therefore be ensured in remote setups. This research focuses on producing and analyzing initial data to compare lab-based and remote-based setups. The goal is to evaluate whether and under which circumstances human factors validations of medical devices could be conducted remotely, and which methodological aspects must be considered. In a simulated human factors validation (usability test), two lab-based and two remote-based conditions were investigated. The lab-based observer was present in the test room during the evaluation; reviewing the session’s recording afterwards served as the second variant of the lab-based observation. The remote-based observer had only the recording as a resource for observation, with the option to review it afterwards as the second remote condition. The observations were based on a simulated human factors validation of two different medical products (a device and a software application). The main basis for data analysis was an observation protocol in which the individual actions to be performed were categorized by the two observer groups according to a classification derived from the FDA’s Human Factors Guidance. Five human factors professionals in each of the lab-based and remote-based setups, all with prior knowledge of the two products under evaluation, generated the protocol data. The datasets from the lab-based and remote-based observations were compared regarding their level of agreement. In addition, the quality of the observations was assessed by comparing them to a sample solution, and the effect of the setups on the observers’ cognitive workload was examined. Descriptively assessed, any-two agreement and Cohen’s κ calculations showed differences between the lab-based and remote-based observations that became smaller when potentially critical actions were in focus. Considering only critical use errors, less than 10% of the observations differed for the medical software, compared to around 15% for the medical device. The quality of observations was slightly higher when the observer was on-site, and, in terms of percentage agreement with the sample solution, better overall for the medical device than for the medical software. Interestingly, comparing total NASA-TLX scores between the setups, a particularly high cognitive workload occurred when the medical device was observed remotely. The findings do not strongly favor either lab-based or remote-based setups. For the medical device, lab-based observation seemed more appropriate, while for the medical software the result is less clear. However, remote observation performed better for the medical software than for the medical device. Observing the evaluation remotely and verifying the results with the help of video recordings detected the highest number of critical use errors. Overall, the initial results of this pilot study highlight the potential of remote evaluations. However, more research is needed to validate the results with a larger sample size and to determine the factors that might favor remote over lab-based approaches.
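For reference, the two agreement measures named in the abstract have standard definitions; the following is the textbook form, not notation taken from the paper itself. Cohen’s κ corrects the observed proportion of agreement \(p_o\) between two observers for the agreement \(p_e\) expected by chance from the marginal category frequencies, and any-two agreement (as commonly defined in the evaluator-effect literature) averages, over all \(\binom{n}{2}\) pairs of observers, the ratio of jointly detected items to items detected by either observer:

\[
\kappa = \frac{p_o - p_e}{1 - p_e},
\qquad
A_{\text{any-two}} = \binom{n}{2}^{-1} \sum_{i < j} \frac{|P_i \cap P_j|}{|P_i \cup P_j|},
\]

where \(P_i\) denotes the set of items (e.g., use errors) recorded by observer \(i\). A κ of 1 indicates perfect agreement; 0 indicates agreement no better than chance.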

Keywords: Remote Usability Evaluation, Medical Device Usability Evaluation, Human Factors Validation, Usability Test

DOI: 10.54941/ahfe1002128
