Human Factors in Cybersecurity

Editors: Abbas Moallem, Kitty Kioskli
Topics: Human Factors in Cybersecurity
Publication Date: 2025
ISBN: 978-1-964867-44-1
DOI: 10.54941/ahfe1005979
Articles
Human Factors and Strategic Approaches in Cybersecurity: Threats for Critical Infrastructures in NIS2 Domains
In 2024, the intensity and frequency of cyber attacks reached unprecedented levels worldwide, with organizations experiencing a notable 28% increase in weekly incidents compared to late 2023. This sharp escalation brings with it severe financial consequences, with global cybercrime losses projected to soar to $13.82 trillion by 2028. Of particular concern are attacks targeting critical national infrastructure (CNI), where the interconnectedness brought about by Industry 4.0 and accelerated digitalization has significantly broadened the attack surface, exposing essential services to elevated risks. Criminal groups and state-sponsored entities are increasingly exploiting these vulnerabilities, with motives ranging from financial gain to strategic disruption of essential societal functions. This evolving threat landscape underscores the critical importance of the NIS2 Directive, implemented in 2024 to bolster Europe’s cyber resilience by expanding the regulatory framework and enforcing baseline security measures across key sectors, creating a more uniform and robust defense against cyber threats. The healthcare sector, in particular, faces unique cybersecurity challenges due to the sensitive nature of patient data and the rapid adoption of digital health technologies, such as electronic health records (EHRs) and Internet of Medical Things (IoMT) devices. These advances make healthcare infrastructure especially vulnerable to cyber attacks, including ransomware, phishing, and data breaches. The increase in digital touchpoints introduces new entry points for attackers, who can exploit weak security policies, poorly configured devices, and limited cyber readiness. Concurrently, emerging technologies such as generative AI and quantum computing present further complexities. 
Given these intersecting and evolving risks, this paper aims to provide a comprehensive narrative literature review of cybersecurity threats affecting critical infrastructure, healthcare systems, and advanced digital technologies, with a strong emphasis on proactive and adaptive strategies to mitigate these challenges. Theoretically, this study enriches the field of cybersecurity by synthesizing current research on vulnerability frameworks across diverse industries, presenting a holistic view of the threat landscape and emerging security needs. It bridges gaps in the literature by examining the interplay between policy measures, technological advancements, and security challenges within these sectors. Practically, this study identifies actionable strategies for securing critical systems, with particular attention to regulatory compliance and the need for proactive cybersecurity measures. It translates high-level research into practical insights, providing guidance on real-world applications, such as the operational impact of the NIS2 Directive in Europe and the importance of quantum-safe cryptographic standards. By doing so, the paper equips stakeholders—government agencies, corporate leaders, and IT security teams—with groundwork for navigating the evolving cybersecurity landscape and developing resilient systems. Through this dual theoretical and practical focus, the paper aims to not only expand academic understanding but also empower professionals to implement informed, adaptive, and robust cybersecurity strategies.
Kitty Kioskli, Leandros Maglaras, Theofanis Fotis, Emmanuel Varouchas
Open Access
Article
Conference Proceedings
Charting Trustworthiness: A Socio-Technical Perspective on AI and Human Factors
Integrating AI into critical decision-making environments, including cybersecurity, highlights the importance of understanding human factors in fostering trust and ensuring safe human-AI collaboration. Existing research emphasizes that personality traits, such as openness, trust propensity, and affinity for technology, significantly influence user interaction with AI systems, impacting trustworthiness and reliance behaviours. Furthermore, studies in cybersecurity underscore the socio-technical nature of threats, with human behaviour contributing to a significant portion of breaches. Addressing these insights, the study discusses the development and validation of a questionnaire designed to assess personality-driven factors in AI trustworthiness, advancing tools to mitigate human-centric risks in cybersecurity. Building on interdisciplinary foundations from cyberpsychology, human-computer interaction, and behavioural sciences, the questionnaire evaluates dimensions including ethical responsibility, collaboration, technical competence, and adaptability. Subject matter experts systematically reviewed items to ensure face and content validity, reflecting theoretical and empirical insights from prior studies on human behaviour and cybersecurity resilience. The tool’s scoring system employs weighted Likert-scale responses, enabling detailed evaluations of trust dynamics and identifying key areas for intervention. By bridging theoretical and applied perspectives, this research contributes to advancing the role of human factors in cybersecurity, offering actionable insights for the design of trustworthy AI systems and calibrated trust practices.
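The weighted Likert-scale scoring the abstract describes could look something like the following sketch. The dimension names follow the abstract, but the weights, item counts, and normalization are illustrative assumptions, not the published instrument:

```python
# Illustrative sketch of a weighted Likert-scale scoring scheme for an
# AI-trustworthiness questionnaire. Weights and normalization are
# hypothetical; only the dimension names come from the abstract.

DIMENSION_WEIGHTS = {
    "ethical_responsibility": 0.30,
    "collaboration": 0.25,
    "technical_competence": 0.25,
    "adaptability": 0.20,
}

def score_responses(responses: dict[str, list[int]]) -> float:
    """Combine 1-5 Likert responses per dimension into one weighted trust score."""
    total = 0.0
    for dim, weight in DIMENSION_WEIGHTS.items():
        items = responses[dim]
        dim_mean = sum(items) / len(items)   # mean item score, range 1..5
        total += weight * (dim_mean - 1) / 4  # rescale each dimension to 0..1
    return round(total, 3)                    # weighted score in [0, 1]
```

Per-dimension subtotals (the `weight * ...` terms) could also be reported separately to flag the "key areas for intervention" the abstract mentions.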
Theofanis Fotis, Kitty Kioskli, Eleni Seralidou
Open Access
Article
Conference Proceedings
Exploring How College Students’ Mental Models of Cybersecurity Threats Predict Cyber Knowledge and Hygiene
The role of human performance is critical in cybersecurity. Cybersecurity professionals and other employees must respond to unanticipated events successfully to maintain safety. Aviation safety has benefited from decades of human factors research to understand the role of threats and human error. Unfortunately, our present understanding of how to train people to respond effectively to cyber threats remains limited. The goal of this study is to investigate the relationship between threat understanding and engagement in behaviors that increase security, called cyber hygiene. Prior research suggested that an ability to recognize latent threats was associated with performance on a situational judgement test. In the current exploratory, descriptive study, the aim was to replicate that result in the domain of cybersecurity. The results of two studies suggest an association between cybersecurity knowledge and mental models of cyber hygiene but do not offer conclusions about the relationship between mental models and cyber hygiene behavior.
David Schuster
Open Access
Article
Conference Proceedings
Leveraging Complex Access Scenarios (CAS) to Bridge Human-Centered HCI
Within the realm of human-computer interaction (HCI), the shift from traditional stimulus-response models to more integrated human-computer partnerships marks a significant technological evolution, coined human-centered AI (HCAI). This shift is driven by the advent of autonomous agents and AI, which transform HCI from simple interactions into complex integrations where systems anticipate user needs and collaborate effectively. This integration challenges us to design systems that are not only efficient and safe but also intuitive, aligning closely with human behavior and expectations. Addressing these challenges creates the opportunity to focus on technical objectives that are crucial in shaping the future of HCI to effectively incorporate HCAI. In this work, Complex Access Scenarios (CAS) are leveraged not only to reveal system complexity but also to propose a method to bridge HCI-to-HCAI as ‘Human-Centered HCI’.
Rahmira Rufus
Open Access
Article
Conference Proceedings
Towards Scalable Solutions of Operational Technology Cybersecurity in Smart Energy Networks
In recent years, the operational technology (OT) cybersecurity threat landscape has widened, driven by increasing digitalization, more sophisticated cyberattacks, and the rise of ransomware. Dependence on energy and information networking and operational technology inevitably exposes smart energy networks to potential vulnerabilities associated with networking systems, increasing the risk of compromising their reliable and secure use. Network intrusion by adversaries may lead to a variety of severe consequences, from customer information leakage to a cascade of failures such as massive blackouts and the destruction of critical infrastructure. Cybersecurity should be considered a core business enabler for smart energy networks. In energy solutions, sector integration means connecting various energy sectors to electricity transfer networks. This increases the overall complexity of the electricity networks, but it also allows the sectors to balance out each other’s peaks in consumption and generation, with benefits towards a carbon-neutral and flexible energy system. Cyber-secure digital platforms will be key to managing this increasing complexity and driving a sustainable energy transition. We introduce a cybersecurity system integration reference model covering common cybersecurity solutions, processes, and architecture for operational technology environments. The model has been validated in several experimental implementations. It will enable the establishment of common, standardized capabilities towards creating competitive advantage in the global business of securing industrial automation. The model covers common architecture, interoperation, processes, tools, and requirements, including the essential information for OT cybersecurity improvement and SOC service up-scaling. The security infrastructure may include unnecessary or duplicated actions, or it may be configured inefficiently; the aim is to identify a more effective configuration.
This includes removing legacy software and devices, consolidating external connections to the internal network, grouping assets, defining allowed actions, listing allowed applications, and simplifying processes to decrease false-positive alarms. A novel cybersecurity governance model for sector-integrated smart energy networks is clearly required, driven by knowledge of risks, vulnerabilities, threats, assets, potential attack impacts, and the motives and targets of potential adversaries. The traditional reactive approach to cybersecurity strategy is no longer effective, nor is it defensible. The focus will be on the best secure and resilient governance practices in sector integration, maintenance, and processes; the handling of security requirements, risks, objectives, and measures; and the management of multiparty operations. The governance model is validated in an experimental laboratory environment for an energy production system. Secure sector integration places many requirements on cybersecurity, OT, policies, and management. An energy production system needs to meet these requirements with validated functionalities, such as cybersecurity and operation controls. Functionalities are distributed across internal and external domains (on-site and the Security Operations Center, SOC). Subsystems of the smart energy network are connected to the SOC by wired or wireless connections. The SOC can use common procedures and processes for different kinds of operations. This enables automation of continuous cybersecurity monitoring, along with AI techniques, making SOCs correlation points for every logged event within the sector-connected energy production system and the overall smart energy network.
Reijo Savola
Open Access
Article
Conference Proceedings
Threats and Security Strategies for IoMT Infusion Pumps
The integration of the Internet of Medical Things (IoMT) into healthcare systems has transformed patient care by enabling real-time monitoring, enhanced diagnostics, and improved operational efficiency. However, this increased connectivity has also expanded the attack surface for cybercriminals, raising significant cybersecurity and privacy concerns. This study focuses on the cybersecurity vulnerabilities of IoMT infusion pumps, which are critical devices in modern healthcare. Through a targeted literature review of the past five years, we analyzed seven current studies from a pool of 132 papers to identify security vulnerabilities. Our findings indicate that infusion pumps face vulnerabilities—such as device-level flaws, authentication and access control issues, network and communication weaknesses, data security and privacy risks, and operational or organizational challenges—that can expose them to lateral attacks within healthcare networks. Our analysis synthesizes findings from these studies to clarify how and why infusion pumps remain vulnerable in each of these areas. By categorizing the security gaps, we highlight critical risk patterns and their implications. This work underscores the scope of the issue and provides a structured understanding that is valuable for healthcare IT professionals and device manufacturers. Ultimately, the findings can inform the development of targeted, proactive security strategies to better safeguard infusion pumps and protect patient well-being.
Ramazan Yener, Muhammad Hassan, Masooda Bashir
Open Access
Article
Conference Proceedings
Analysis of Large Language and Instance-Based Learning Models in Mimicking Human Cyber-Attack Strategies in HackIT Simulator
Understanding human strategies in cyber-attacks is essential for advancing cybersecurity defense mechanisms. However, the ability of computational cognitive and artificial intelligence (AI) models to effectively replicate and predict human decision-making in realistic cyber-attack scenarios remains underexplored. This study addresses this gap by evaluating the performance of two distinct models—Instance-Based Learning (IBL; a cognitive model) and a Large Language Model (LLM, GPT-4o; an AI model)—in mimicking human cyber-attack strategies using the HackIT simulation tool. The experiment employed a 2 × 2 design varying network topology (Bus vs. Hybrid) and network size (Small: 40 nodes vs. Large: 500 nodes), involving 84 randomly assigned participants (42 teams) across four conditions: Hybrid 40 (24 participants, 12 teams), Hybrid 500 (22 participants, 11 teams), Bus 40 (18 participants, 9 teams), and Bus 500 (20 participants, 10 teams). Participants collaborated in pairs over 10-minute sessions to attack networks consisting of an equal mix of honeypots (50% fake systems) and real systems (50% regular systems). Attack strategies included scanning for vulnerabilities with nmap and then exploiting identified weaknesses through HackIT. Both human and model performance were evaluated on three dependent variables: total systems exploited, total honeypots exploited, and total real systems exploited. Eighty percent of the human data was used for model training and 20% for testing. The IBL model, calibrated with ACT-R cognitive architecture parameters (decay and noise ranging from 0.1 to 3), closely mirrored human behavior across conditions and excelled in distinguishing honeypots from real systems, especially in smaller networks. For instance, in the "Bus 40" condition, the IBL model achieved a low mean squared error (MSE = 0.0576) relative to human participants in honeypot exploitation.
Similarly, the IBL model outperformed in detecting honeypots across conditions, demonstrating its ability to replicate complex cognitive processes. The GPT-4o model showed exceptional flexibility, especially in smaller networks, after being tuned for temperature (0.5, 1, 1.5) and top-k sampling (2, 3, 4). For instance, GPT-4o demonstrated near-equivalent performance in the "Bus 40" condition, exploiting 19 systems with an MSE of 1.000 compared to the 20 systems exploited by human participants. In real-system exploitation, it demonstrated its capacity to scale and dynamically modify tactics, consistently achieving high accuracy across configurations. Model validation using HackIT simulations showed that the IBL model offered deeper insights into cognitive decision-making processes, while GPT-4o was superior at exploiting real systems and adapting to complicated situations. Both models showed complementary strengths, with GPT-4o performing exceptionally well on total- and real-system exploits and IBL providing excellent honeypot detection. By using cognitive and AI-based models to replicate human attacker activities across various network setups, our study closes a significant knowledge gap. The findings highlight the usefulness of IBL in revealing the cognitive foundations of decision-making and the scalability of GPT-4o for complicated scenarios. When combined, these models provide a strong basis for simulating hostile tactics, locating weaknesses, and bolstering defenses in contemporary cybersecurity settings.
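The mean squared error used above to compare model and human exploit counts can be computed along these lines. The single-condition input below mirrors the reported "Bus 40" GPT-4o figures (19 vs. 20 systems, MSE = 1.000); per-team inputs and any averaging scheme beyond that are assumptions, not the study's procedure:

```python
def mse(model_counts: list[float], human_counts: list[float]) -> float:
    """Mean squared error between paired model and human exploit counts."""
    if len(model_counts) != len(human_counts) or not model_counts:
        raise ValueError("inputs must be non-empty and equal-length")
    return sum((m - h) ** 2
               for m, h in zip(model_counts, human_counts)) / len(model_counts)
```

With finer-grained data, the same function would be applied per dependent variable (total, honeypot, and real-system exploits) within each condition.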
Shubham Sharma, Shubham Thakur, Megha Sharma, Ranik Goyal, Shashank Uttrani, Harsh Katakwar, Kuldeep Singh, Palvi Aggarwal, Varun Dutt
Open Access
Article
Conference Proceedings
Resolving Conflicts Between PSIRT and Safety Teams: A Collaborative Approach
The need to meet safety and security requirements simultaneously is increasing in industrial control systems (ICS) and industrial robots, where network connectivity is rapidly expanding. However, the "safety first" culture that has taken root in many companies has put security requirements on the back burner, creating a structure prone to conflicts between the two domains. In this study, the authors elucidate the conflict factors in the safety and security life cycle and propose a new collaborative framework based on the knowledge creation theory (SECI model, Ba, knowledge assets) of Nonaka et al. We conducted semi-structured interviews with, and qualitative analysis of, five Japanese industrial product suppliers. The interviews highlighted potential and actual conflicts between product safety and security teams (e.g., PSIRT: Product Security Incident Response Team). We propose a model for resolving these conflicts by addressing cultural and cognitive gaps among experts from a human factors perspective. We hope this model improves risk management across industries amid tightening cybersecurity laws and regulations worldwide, such as the EU Cyber Resilience Act.
Jumpei Tahara, Kenji Watanabe, Ichiro Koshijima, Ryushun Oka
Open Access
Article
Conference Proceedings
Assessing and Communicating Software Security: Enhancing Software Product Health with Architectural Threat Analysis
Assessing and communicating software security has become a crucial concern in the era of digital transformation. As software systems grow more complex and interconnected, it becomes increasingly challenging to effectively evaluate and communicate a product's security status to both technical and non-technical stakeholders. The Software Product Health Assistant (SPHA) is designed to automatically collect and aggregate data from existing expert tools and derive, among other scores, a transparent Security Score. SPHA is designed to present and explain this Security Score to decision-makers to support their responsibilities. In this paper, we demonstrate how to integrate data from SMARAGD (System Modeler for Architectural Risk Assessment and Guidance on Defenses), a safety-informed threat modeling tool, into SPHA to enhance the existing definition of its Security Score. To achieve this, we combine information about known vulnerabilities with architectural and threat data to calculate a realistic risk score for the product in question.
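Combining known-vulnerability severity with architectural and threat data, as the abstract describes for the SPHA/SMARAGD integration, might be sketched as follows. The field names, the CVSS-style 0-10 scale, and the worst-case aggregation rule are all illustrative assumptions, not the tools' actual data model or formula:

```python
# Minimal sketch of an exposure-weighted risk score: each known
# vulnerability's severity is scaled by how reachable the affected
# component is according to an architectural threat model.
# Field names and the aggregation rule are hypothetical.

from dataclasses import dataclass

@dataclass
class Finding:
    cvss: float      # base severity of the known vulnerability, 0..10
    exposure: float  # architectural reachability from the threat model, 0..1

def risk_score(findings: list[Finding]) -> float:
    """Aggregate exposure-weighted severities into a 0..10 product risk score."""
    if not findings:
        return 0.0
    # The worst exposure-weighted finding dominates: a severe but
    # unreachable flaw can score lower than a moderate, exposed one.
    return max(f.cvss * f.exposure for f in findings)
```

Note how the architectural context changes the ranking: a CVSS 9.8 flaw behind strong isolation (exposure 0.5) scores below a CVSS 5.0 flaw on a fully exposed interface, which is the kind of "realistic risk score" the integration aims for.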
Jan-niclas Strüwer, Roman Trentinaglia, Benedict Wohlers, Eric Bodden, Roman Dumitrescu
Open Access
Article
Conference Proceedings
Next Generation BCM Solution
The variety of cyber-attacks against critical infrastructure and of disinformation campaigns has grown rapidly in recent years and is forecast to continue on its current path. The number of cyber incidents in industrial enterprises doubled from 2019 to 2020, and they have continued increasing at a similar rate. In 2023, the global average cost of a single data breach was 4.45 million USD. Current cyber threat detection solutions monitor critical systems and respond to anomalous incidents. They operate reactively, patching holes rather than dealing with the core problem itself. To decrease the number of attacks on critical infrastructures and societies, threat detection systems must evolve to prevent such incidents. Therefore, preventative threat management is needed to enhance the capabilities of existing Business Continuity Management (BCM) solutions. The research question of this paper is to define the requirements for a next generation BCM solution. Future BCM systems must evolve beyond traditional disaster recovery to offer proactive, predictive, and threat-data-sharing capabilities. Next generation BCM solutions must utilize advanced technologies, ensure regulatory compliance, and provide organizations with the tools to anticipate and prevent threats. An automated and comprehensive threat data collection solution is the basic foundation for any threat prevention and analysis system. Internet of Things (IoT) technologies are to be utilized for collecting massive amounts of real-time data. Artificial intelligence (AI) is an excellent tool for both large-scale threat data analysis and collection. Blockchain technology should be used for the secure and transparent distribution of threat data, to further improve detection of and response to new attack patterns, malware, or vulnerabilities before they can exploit critical systems to a greater extent. This is critical, since newly discovered weaknesses must be blocked quickly to prevent greater damage.
Digital Twin solutions are well suited to simulating, refining, and optimizing an organization's BCM strategies. Quantum computing can be seen as a significant risk for BCM solutions, since modern cryptography relies on the difficulty of certain mathematical problems that are hard for classical computers but could be solved quickly by a quantum computer. Fortunately, quantum computing can also improve threat prevention by detecting threats faster and enabling new cryptographic solutions resistant to both classical and quantum attacks.
Markus Sihvonen, Riku Lehkonen, Arttu Takala
Open Access
Article
Conference Proceedings
Investigating Human Factors Engineering Integration in ATC Cybersecurity Resilience
The digital transformation of Air Traffic Control (ATC) systems has improved operational efficiency and safety. However, increased reliance on technology has introduced significant cybersecurity vulnerabilities. While current cybersecurity strategies often focus on technical defenses, they tend to overlook the critical role of human operators, particularly air traffic controllers (ATCOs), in ensuring system resilience against cyber threats. ATCOs are the primary users of the advanced technology. Failing to account for their cognitive and physical limitations in cybersecurity solutions can lead to cognitive overload, reduced situation awareness (SA), increased error rates, and fatigue, ultimately compromising the effectiveness of technical safeguards. Human Factors Engineering (HFE) offers a valuable approach by optimizing human-system interaction and accounting for user characteristics, capabilities and limitations in complex, high-risk environments like ATC. This study explores the integration of HFE principles into ATC cybersecurity protocols to enhance system resilience. Using an exploratory qualitative approach, it synthesizes insights from scholarly literature, government reports, case studies, and industry best practices to propose a conceptual framework for HFE integration in ATC cybersecurity. Five key HFE principles, including user-centered design, error reduction, safety prioritization, accommodation of individual differences, and task-person fit, are identified as essential for supporting ATCOs in cyber threat detection, decision-making, and system interaction. 
Findings highlight that HFE-informed designs, such as intuitive interfaces, adaptive alerts, ergonomic workstations, and tailored training, can reduce cognitive workload, improve SA, and support ATCO performance during cyber-attacks. This study underscores the need to integrate HFE into aviation cybersecurity, promoting a holistic approach that acknowledges and supports human capability and usability. It offers insights for enhancing both technology and human reliability against evolving threats and contributes to the growing discourse on human-centered cybersecurity, laying the groundwork for future research on quantifying HFE’s impact in ATC environments.
Hui Wang, Nathan Schultz
Open Access
Article
Conference Proceedings