Human Factors in Cybersecurity

Editors: Abbas Moallem

Topics: Human Factors in Cybersecurity

Publication Date: 2024

ISBN: 978-1-964867-03-8

DOI: 10.54941/ahfe1004759

Articles

Using DESM to demonstrate how behavior can impact an enterprise's physical attack surface structure

This paper addresses behaviors affecting the attack surface structure of a simulated enterprise model. The work conducted identifies human factors that contribute to the enterprise’s physical attack surface. Such factors include social engineering, phishing, insider threats, and inadequate employee awareness and training (AET). By leveraging a Descriptive Enterprise System Model (DESM), we demonstrate how behavior impacts the enterprise’s physical attack surface structure. The focus in this phase of the research is associating human factors with this condition. The model is leveraged to (1) functionally isolate behavior as a factor impacting the enterprise’s physical attack surface and (2) isolate human factors as an indicator of an enterprise’s behavior.

Rahmira Rufus, Jeff Greer, Ulku Clark, Geoffrey Stoker
Open Access
Article
Conference Proceedings

Proposing a DESM-based analytical framework for the enterprise cyber defender

This paper proposes an analytical framework for the next generation of cybersecurity architecture and strategy to assist the enterprise cyber defender. We built an enterprise system model for practitioner use by leveraging representative enterprises as critical infrastructure operators to achieve a learning objective. The learning objective is to assist the cyber defender with developing the enterprise cybersecurity architecture and strategy via the framework. The focus is to investigate an awareness, education, and training (AET) approach aimed at human factors concerning the role of the enterprise cybersecurity architect, where one architectural perspective is concerned with the successful operation of the enterprise while the other is focused on preventing its failure. The goal is to identify the cybersecurity practitioner’s progress outcomes via a process prescribed by the Descriptive Enterprise System Model (DESM) as an adapted analytical framework for cybersecurity architect utilization (Clark et al., 2023). The objective is, first, to utilize the framework’s three-tiered structure and, second, to target the process at resolving Crume’s three key factors for cybersecurity architecture roles and tools: (1) understanding how the system operates, (2) determining the potential for failure, and (3) determining the threshold to circumvent failure (Crume, 2023).

Rahmira Rufus, Jeff Greer, Ulku Clark, Geoffrey Stoker, Thomas Johnston
Open Access
Article
Conference Proceedings

Interactive virtual learning environment to develop next-generation cybersecurity practitioner competency

This paper groups simulated behavioral, technical, and operational elements of a ‘real enterprise’ for cybersecurity awareness, education and training (AET) evaluation. The research goal is developing next-generation cybersecurity practitioner competency congruent with behavioral and socioeconomic aspects of the next generation of computing. Within the cybersecurity knowledge domain, the modern digital enterprise is the system of interest (SOI) that requires enterprise cybersecurity execution to ensure security fitness based upon system state criteria. For this scope of the research, the enterprise is simulated via a web interface engineered to focus on the human entity as the key indicator of the success or failure of the enterprise’s security posture. The virtual learning interface is the application domain called the Integrated Virtual Learning Environment for Cybersecurity (IVLE4C). The objective is to leverage IVLE4C as a tool to increase practitioner proficiency: there is currently tremendous investment in secure enterprise digitalization for the next generation of computing, yet no specialized engineering workstation exists for this type of platform. The IVLE4C workstation is intended to provide such a platform to enhance the development and efforts of enterprise cyber defenders, reduce this learning curve, and improve this human attack surface factor within the AET space.

Rahmira Rufus, Jeff Greer, Ulku Clark, Geoffrey Stoker, Thomas Johnston
Open Access
Article
Conference Proceedings

Biometric Authentication for the Mitigation of Human Risk on a Social Network

The increasing reliance on digital systems in today's interconnected world has brought about a corresponding surge in cyber threats, making cybersecurity a critical concern. While technological advancements have bolstered defense mechanisms, human factors remain a significant vulnerability. This paper explores the intersection of human factors and cybersecurity, focusing on how biometric authentication can serve as a potent mitigating strategy. The human element in cybersecurity encompasses a range of factors, including user behavior, cognitive biases, and susceptibility to social engineering attacks. Understanding and addressing these aspects is crucial for developing robust and effective cybersecurity measures. Traditional methods such as passwords and PINs, which heavily rely on user memory, are inherently vulnerable to human error, leading to weak access controls and unauthorized access. One key advantage of biometrics is the inherent difficulty in replicating or forging an individual's unique characteristics. Unlike passwords that can be forgotten, shared, or stolen, biometric traits are inherently tied to an individual, providing a more reliable means of authentication. Moreover, the seamless integration of biometrics into daily activities reduces the cognitive burden on users, potentially leading to increased compliance with security protocols.

Biometric authentication presents a promising avenue for overcoming the limitations associated with traditional methods. By leveraging unique physiological characteristics, biometrics offer a more secure and user-friendly approach to identity verification. This paper proposes a one-time facial recognition system in conjunction with an online social network, where individuals belonging to the network have their own server participating in the WebID protocol. The WebID protocol enables individuals to control their own identity and represents a network of individuals in a decentralized web of trust. A social network with the WebID protocol consists of trusted individuals, and acceptance can be handled through a voting scheme in which existing members must vouch for a new member. Controlling the member population of a network can help prevent phishing attacks by restricting communications to members of the social network. However, this is not a perfect system, and biometrics can be used as an added layer of security to prevent successful attacks spurred on by human factors.

Replacing traditional passwords with biometrics can help mitigate social engineering attacks, though human privacy remains an important consideration for many individuals. Biometrics can compromise privacy, so we propose a scheme to represent biometrics in a one-time fashion that still preserves a high recognition rate for accurate acceptance/rejection during verification. This is done using a combination of the Local Binary Patterns feature extraction technique with evolutionary computation techniques to evolve unique feature extractors (to be used one time) that also maintain accurate recognition rates. Prior results have shown this technique to be effective on preliminary datasets; the work in this paper shows the effectiveness of this technique in a social network combined with the WebID protocol to prevent successful cyber-attacks spawned from human error. Additionally, we discuss ways privacy can be compromised and how the one-time disposable biometrics system can preserve privacy.
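
To make the feature-extraction building block concrete, below is a minimal sketch of the basic Local Binary Patterns operator in Python/NumPy. The `active_cells` mask is a hypothetical stand-in for the evolved, one-time feature extractors described above; the paper's actual evolutionary search and matching pipeline are not reproduced here.

```python
# Minimal sketch of the basic Local Binary Patterns (LBP) operator.
# `active_cells` is a hypothetical stand-in for the one-time, evolved
# extractors described in the abstract: an evolutionary algorithm would
# search over such per-session masks/weights.
import numpy as np

def lbp_image(gray: np.ndarray) -> np.ndarray:
    """8-neighbor LBP code for each interior pixel of a grayscale image."""
    c = gray[1:-1, 1:-1]
    # Shifted views of the 8 neighbors, each aligned with the center pixel.
    neighbors = [gray[:-2, :-2], gray[:-2, 1:-1], gray[:-2, 2:],
                 gray[1:-1, 2:], gray[2:, 2:], gray[2:, 1:-1],
                 gray[2:, :-2], gray[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbors):
        codes |= (n >= c).astype(np.uint8) << bit
    return codes

def lbp_feature_vector(gray, grid=(4, 4), active_cells=None):
    """Concatenate per-cell LBP histograms; `active_cells` selects which
    grid cells contribute (the part an evolutionary search would vary)."""
    codes = lbp_image(gray)
    row_blocks = np.array_split(np.arange(codes.shape[0]), grid[0])
    col_blocks = np.array_split(np.arange(codes.shape[1]), grid[1])
    feats = []
    for i, r in enumerate(row_blocks):
        for j, cols in enumerate(col_blocks):
            if active_cells is not None and not active_cells[i][j]:
                continue  # cell disabled by this one-time extractor
            cell = codes[np.ix_(r, cols)]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256), density=True)
            feats.append(hist)
    return np.concatenate(feats)
```

Verification would then compare such feature vectors with a histogram distance (e.g., chi-square) against an enrolled template built with the same one-time mask.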

Aldrewvonte Jackson, Kofi Kyei, Yasmin Eady, Brian Dowtin, Bernard Aldrich, Albert Esterline, Joseph Shelton
Open Access
Article
Conference Proceedings

Measuring How Appropriate Individuals Are for Specific Jobs in a Network of Collaborators

We simulate social networks, where undirected edges are mutual friendships, to find the effect of their structure on the aptness of persons for performing a given job. A job J requires a given set of tasks, and each node (person) n can perform a given set of tasks. If the ego network EG of n cannot perform all tasks for J, then n fails on J. Otherwise, n’s score is computed as a weighted sum of measures of centrality, embeddedness (core number), attribute and degree assortativity of the nodes in EG, the degrees of these nodes, and the performance of these nodes on accuracy, speed, and reliability. Experiments were run on random networks from three models, across values of an independent variable controlling the number of edges: Erdős-Rényi (ER), Barabási-Albert (BA), and Watts-Strogatz (WS). Average values of the maximum, average, and minimum node scores for each value of the variable for each model were plotted. For all models, the core-number measure largely accounts for the curves’ shapes. Our core-number measure averages node n’s core number, the average of n’s neighbors’ core numbers, and the smallest of these. For ER networks, scores increase with the number of edges as nodes become more embedded. For BA and WS networks, there is an initial decrease, conjectured to reflect a person collaborating with many little-embedded helpers, untested and perhaps not well trusted. Our approach to members’ aptness for jobs preserves the security of a secure community, keeping the calculations within the community.
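
For readers who want to experiment, here is an illustrative Python/networkx sketch of the scoring procedure described above. The ego-network task-coverage check follows the abstract; the weights and the exact combination of measures are hypothetical placeholders, not the authors’ formula (the assortativity and accuracy/speed/reliability terms are omitted for brevity).

```python
# Illustrative sketch of the job-aptness score described above (networkx).
# Weights and the combination of measures are hypothetical; the paper's
# weighted sum includes further terms (assortativity, task performance).
import networkx as nx

def aptness(G, n, job_tasks, tasks, weights=(0.4, 0.3, 0.3)):
    """Score node n for a job, or None if n's ego network fails the job.

    G: undirected friendship graph; tasks[v]: set of tasks v can perform;
    job_tasks: set of tasks the job J requires.
    """
    ego = nx.ego_graph(G, n)                      # n plus its friends
    covered = set().union(*(tasks[v] for v in ego))
    if not job_tasks <= covered:
        return None                               # ego network cannot do J

    betw = nx.betweenness_centrality(G)           # a centrality measure
    core = nx.core_number(G)                      # embeddedness proxy
    w_c, w_k, w_d = weights
    score = sum(w_c * betw[v] + w_k * core[v] + w_d * G.degree(v)
                for v in ego)
    return score / ego.number_of_nodes()

# The experiments use random-graph models such as:
#   nx.erdos_renyi_graph(100, 0.05), nx.barabasi_albert_graph(100, 3),
#   nx.watts_strogatz_graph(100, 6, 0.1)
```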

Yasmin Eady, Kofi Kyei, Aldrewvonte Jackson, Bernard Aldrich, Brian Dowtin, Joseph Shelton, Albert Esterline
Open Access
Article
Conference Proceedings

A Notion of Trustworthiness Based on Centrality in a Social Network

We develop a measure of trustworthiness for members of a social network that supports collaborative effort in a domain. Edges represent explicitly declared friendships. The measure for a person is the geometric mean of their betweenness and eigenvector centralities in their network. The focus is on ranking people according to these values, which are normalized. We show the rankings of the people in an Erdős-Rényi (ER) network according to our measure. In experiments on Barabási-Albert (BA) and Watts-Strogatz (WS) as well as ER networks, the average differences between the maximum and minimum trustworthiness of the people are plotted against the independent variable of each model that results in an increasing number of edges. For the ER and WS networks, this difference decreased significantly and nearly linearly with the independent variable, while the trustworthiness values increased: it is harder to distinguish the trustworthy from the untrustworthy when all are fairly trustworthy. For the BA networks, in contrast, this spread decreased to a minimum and then increased. It is conjectured that, with an increasing number of edges, how embedded hubs are in the network becomes a dominant factor while non-hubs remain not very embedded. This measure has been used in defining a protocol for a group using a distributed authentication protocol to decide whether to admit a candidate, an example of how our work provides a secure way for people to collaborate that exploits human characteristics.
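
The core computation is compact enough to sketch directly. The Python/networkx fragment below computes the geometric mean of betweenness and eigenvector centralities and ranks the members of an ER network; it assumes a connected graph, and any normalization beyond networkx’s defaults is the paper’s.

```python
# Minimal sketch of the trustworthiness measure: the geometric mean of
# betweenness and eigenvector centralities (networkx defaults normalize
# both). Meaningful for connected graphs.
import math
import networkx as nx

def trustworthiness(G):
    betw = nx.betweenness_centrality(G)
    eig = nx.eigenvector_centrality_numpy(G)
    # max(..., 0.0) guards against tiny negative values from numerics.
    return {v: math.sqrt(betw[v] * max(eig[v], 0.0)) for v in G}

# Ranking members of an Erdős-Rényi network, as in the paper's experiments:
G = nx.erdos_renyi_graph(50, 0.1, seed=1)
ranked = sorted(trustworthiness(G).items(), key=lambda kv: kv[1], reverse=True)
```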

Brian Dowtin, Kofi Kyei, Yasmin Eady, Aldrewvonte Jackson, Bernard Aldrich, Joseph Shelton, Albert Esterline
Open Access
Article
Conference Proceedings

Towards a Human-Centric AI Trustworthiness Risk Management Framework

Artificial Intelligence (AI) aims to replicate human behavior in socio-technical systems, with a strong focus on AI engineering to replace human decision-making. However, an overemphasis on AI system autonomy can lead to biased, unfair, and unethical decisions, and thus a lack of trust, resulting in decreased performance, motivation, and competitiveness. To mitigate these AI threats, developers are incorporating ethical considerations, often with input from ethicists, and using technical tools like IBM's AI Fairness 360 and Google's What-If Tool to assess and improve fairness in AI systems. These efforts aim to create more trustworthy and equitable AI technologies. Building trustworthiness into AI technology does not necessarily imply that the human user will fundamentally trust it. For humans to use technology, trust must be present, something challenging when AI lacks a permanent, stable physical embodiment. It is also important to ensure humans do not over-trust, resulting in AI misuse. Trustworthiness should be assessed in relation to human acceptance, performance, satisfaction, and empowerment to make design choices that grant them ultimate control over AI systems, and the extent to which the technology meets the business context of the socio-technical system where it is used. For AI to be perceived as trustworthy, it must also align with the legal, moral, and ethical principles and behavioral patterns of its human users, whilst also considering the organizational responsibility and liability associated with the socio-technical system's business objectives. Commitment to incorporating these principles to create secure and effective decision support AI systems will offer a competitive advantage to organizations that integrate them.

Based on this need, the proposed framework is a synthesis of research from diverse disciplines (cybersecurity, social and behavioral sciences, ethics) designed to ensure the trustworthiness of AI-driven hybrid decision support while accommodating the specific decision support needs and trust of human users. Additionally, it aims to align with the key performance indicators of the socio-technical environment where it operates. This framework serves to empower AI system developers, business leaders offering AI-based services, and AI system users, such as educators, professionals, and policymakers, in achieving a more absolute form of human-AI trustworthiness. It can also be used by security defenders to make fair decisions during AI incident handling. Our framework extends the proposed NIST AI Risk Management Framework (AI RMF): at all stages of the trustworthiness risk management cycle (threat assessment, impact assessment, risk assessment, risk mitigation), human users are considered (e.g., their morals, ethics, behavior, IT maturity) as well as the primary business objectives of the AI socio-technical system under assessment. Co-creation and human experiment processes must accompany all stages of system management and are therefore part of the proposed framework. This interaction facilitates the execution of continuous trustworthiness improvement processes. During each cycle of trustworthiness risk mitigation, human user assessment will take place, leading to the identification of corrective actions and additional mitigation activities to be implemented before the next improvement cycle. Thus, the main objective of this framework is to help build ‘trustworthy’ AI systems that are ultimately trusted by their users.

Kitty Kioskli, Laura Bishop, Nineta Polemi, Antonis Ramfos
Open Access
Article
Conference Proceedings

Does penalty help people learn to detect phishing emails?

Phishing attacks are increasingly prevalent and pose a significant threat to organizations worldwide. Many organizations implement phishing training programs to educate employees on how to recognize and avoid phishing attacks. Incentives are often used in these training programs to motivate employees to participate and engage with the material. However, the impact of incentives on the effectiveness of these training programs is not well understood. Similarly, how often such training should be provided remains an additional factor in improving detection ability. Past research has provided evidence that frequency impacts susceptibility to phishing emails. However, the interaction of frequency and incentives in phishing training is not well known. Key questions persist: Do individuals exhibit greater attention and motivation to detect phishing emails when penalties are imposed? How does exposure to more phishing emails contribute to evading penalties? This paper manipulates the frequency of phishing emails during the training phase and the incentive structure for classifying emails. Experiments were conducted using a Phishing Training Task (PTT), an interactive software platform that emulates key tasks associated with email response decision making, to test the impact of learning factors on phishing detection. The results indicate that imposing penalties for incorrect decisions does not have a significant effect on detection performance for most of the conditions. Thus, our results suggest that a symmetric incentive structure may not improve phishing detection ability. These findings highlight the importance of experimenting with additional incentive structures in phishing training programs. This paper provides guidelines for using cognitive models to design effective incentive structures.
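
For concreteness, a symmetric incentive structure of the kind examined here rewards correct classifications and penalizes errors equally, while an asymmetric alternative might penalize missed phishing more heavily. The point values below are hypothetical, not those used in the study.

```python
# Hypothetical payoff schemes for one email-classification decision in a
# phishing training task; the study's exact point values are not shown here.
def symmetric_payoff(is_phish: bool, judged_phish: bool) -> int:
    """+1 for a correct classification, -1 (penalty) for an incorrect one."""
    return 1 if is_phish == judged_phish else -1

def asymmetric_payoff(is_phish: bool, judged_phish: bool) -> int:
    """An alternative structure penalizing missed phishing more heavily."""
    if is_phish and not judged_phish:
        return -5          # miss: the costly error in practice
    return 1 if is_phish == judged_phish else -1
```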

Kuldeep Singh, Palvi Aggarwal, Cleotilde Gonzalez
Open Access
Article
Conference Proceedings

A survey of agent-based modeling for cybersecurity

Cybersecurity is gaining an increasing focus as a necessary foundation for the safe and secure digitalization of modern society – but the expanding interconnectedness of systems, organizations, and people in society is leading to an intricately entangled web of challenges that traditional cybersecurity models struggle to manage. Even when addressing security only at a technical level, the interactions resulting from the total number of installed software packages on the computers of a medium-sized organization, internally as well as connected through networks, and their related vulnerabilities present almost insurmountable computational challenges. When adding organizational security management procedures, shifting cyber-attack strategies, and increasing dependence on third parties such as cloud providers and communication networks, it becomes evident that organizational cybersecurity is a complex problem. Researchers both outside and inside the cybersecurity domain have called for addressing this increasing complexity through the lens of “complex adaptive systems.” The term complex adaptive systems is used differently by different researchers and in various fields but is usually understood as systems consisting of dynamic interacting agents, acting in parallel, with the ability to react to the environment and other agents and to adapt and learn from their interaction, giving rise to emergent behavior. Agent-based models (ABMs) have become a powerful tool for studying such systems. ABMs are a type of computational model that simulates the actions and interactions of autonomous agents to assess their effect on the system as a whole, modeling from the bottom up, starting with the individual agents. ABMs in cybersecurity must be developed and used properly. Cybersecurity researchers and practitioners wishing to include ABM in their toolbox should follow established best practices regarding model conceptualization, building, and validation. To support this, we have surveyed existing ABM applications in cybersecurity to identify and discuss challenges and weaknesses and to identify areas of improvement and new possibilities. Drawing on the existing literature, we identify which cybersecurity problems ABMs may be applied to and what challenges may arise, and we discuss suggestions for best practices drawing on experiences from other fields where ABMs have been used successfully, including political science and policy management, ecological systems, and market models. We describe the reasons researchers give for choosing to employ ABMs, identify the main types of applications, and identify the leading tools, software, and frameworks that have been used in developing ABMs in the cyber domain. We then discuss weaknesses in existing approaches and suggest areas of improvement for building well-grounded, robust, and validated cybersecurity models and simulations. Finally, we discuss new possibilities for ABM-based research incorporating sensor-based systems, big data processing, and a better understanding of human agents in cybersecurity.
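
To give a flavor of the bottom-up modeling style surveyed here, below is a toy agent-based sketch in Python/networkx, illustrative only and not drawn from any surveyed paper: agents on a small-world “organization” network are probabilistically compromised by infected neighbors and cleaned by IT response, and the organization-level infection curve emerges from these local rules.

```python
# Toy agent-based model of malware spread in an organization (illustrative
# only; all parameter values are hypothetical). Each agent reacts to its
# neighbors, and organization-level infection rates emerge bottom-up.
import random
import networkx as nx

def step(G, infected, p_infect=0.05, p_clean=0.10):
    new_infected = set(infected)
    for agent in G:
        if agent in infected:
            if random.random() < p_clean:         # IT response cleans the host
                new_infected.discard(agent)
        else:
            # Exposure grows with the number of compromised neighbors.
            exposed = sum(1 for nb in G[agent] if nb in infected)
            if random.random() < 1 - (1 - p_infect) ** exposed:
                new_infected.add(agent)
    return new_infected

G = nx.watts_strogatz_graph(200, 6, 0.1, seed=7)  # small-world "org chart"
infected = {0}                                     # one initially compromised host
history = []
for _ in range(50):
    infected = step(G, infected)
    history.append(len(infected))                  # emergent macro-level curve
```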

Arnstein Vestad, Bian Yang
Open Access
Article
Conference Proceedings

Mental Firewall Breached: Leveraging Cognitive Biases for Enhanced Cybersecurity

Humans commonly use heuristics (“mental shortcuts”) to make rapid decisions. While these processes are efficient, they can produce systematic errors, referred to as cognitive biases, that can lead to decrements in task performance. To explore whether triggers for cognitive biases might be employed to interfere with a cyber-attacker’s decision making, and if so to what extent, the current study used a simplified, game-like cyber-attack scenario on a vulnerable banking application. Specifically, we examined the effects of two manipulations of anchoring bias and asymmetric dominance effects in a 2x2x3 between-subjects design. Across 10 “attack rounds” or blocks, 196 participants encountered at least 5 and no more than 10 bank accounts serving as experimental trials. Participants decided either to “steal” the money from each account or to “skip” the account. Selected actions increased their probability of being detected on subsequent trials, but the probabilities were not revealed to the participants. To induce a potential anchoring effect, information received in pre-task instruction was manipulated to provide either an arbitrary but specific number of bank accounts they could attack before detection, or vague instruction cautioning against stealing from “too many” accounts. Additionally, values associated with the initial bank account presented on each attack round varied among very high, standard, and very low values, to determine how anchoring effects from those amounts influenced subsequent decisions. To capture asymmetric dominance, participants also selected from potential systems with either two (Asymmetry Absent) or three (Asymmetry Present) options. After finishing the experimental task, participants performed the Balloon Analogue Risk Task (BART) to explore whether risk-taking behavior was associated with any key aspects of performance. Findings suggest an instructional anchor at the start of the session did not affect the number of times participants were detected, nor did it influence the average amount of money they stole from accounts. We did find evidence to support the impact of account value anchors on both the average amount of money stolen and the number of times participants got caught; however, these behaviors were significantly impacted by the account value anchor only in the absence of an instructional anchor. These findings show that cognitive biases can influence decision making in this task, but their effects are mitigated when a bias is manipulated concurrently from different sources. Asymmetric dominance effects were found only in the conditions given a specific instructional anchor as part of the anchoring manipulation, which might reflect that the order of the instructional content and attack selection played a role in attention. Analysis of participants’ behavior on the BART supported the notion that the presence of specific information in a task produced behaviors related to risk-taking propensity. Overall, these findings offer some proof of concept for the potential use of cognitive biases to influence and detect cyber attacker behaviors, but also suggest a level of caution is appropriate when seeking to integrate multiple biases into cyber contexts. Other findings and potential implications are discussed.

Rebecca Pharmer, Rosa Martey, Giovanna Henery, Ethan Myers, Indrakshi Ray, Benjamin Clegg
Open Access
Article
Conference Proceedings

Analyzing important factors in cybersecurity incidents using table-top exercise

In recent years, the threat of cyber-attacks has increased steadily, and organizations of all kinds must take countermeasures. In the face of increasing threats, organizations need to take not only technical measures but also human countermeasures. However, cyber-attacks themselves are becoming more sophisticated, so it is important for organizations to prepare countermeasures and organizational structures based on the assumption that incidents due to cyber-attacks will occur. Moreover, organizations are required to minimize the damage caused by cyber-attack incidents and continue their business operations.

This study focused on human countermeasures, especially organizational structures; we designed an incident response exercise and conducted it with approximately 60 members of a critical infrastructure company in Japan. Based on the records of the exercise and the results of the post-exercise questionnaire, we examine the organizational and human barriers that organizations may face in incident response and the organizational structure that minimizes the damage from incidents. The incident response exercise was based on a scenario in which a hypothetical local infrastructure company was infected with ransomware and could not fulfill its role as local infrastructure. The roles of management, the IT department, and upper-level managers and personnel in the field departments were defined, and how incident response would be conducted from each position was examined. The exercise was recorded using the chronology format employed in disaster recovery, organizing in chronological order which instructions were given, by whom, and to whom, so that participants could review the details of their responses after the exercise. A questionnaire survey was conducted after the exercise, and the exercise itself received a high evaluation, with an average score of 4 or higher out of 5. In addition, information on important items in incident response, including changes before and after the exercise, was collected through free-response statements. Context-based evaluation and analysis of the collected results revealed what members of the Japanese critical infrastructure community consider important in incident response. Furthermore, from the contents recorded in the chronology during the exercise, the process of escalation and decision-making up to management and upper management was analyzed to identify barriers, such as delays in reporting and decision-making, that may lead to the expansion of incident damage. Based on the results of these analyses, we make recommendations on the organizational structure and transfer of authority needed for rapid incident response.

Kenta Nakayama, Ichiro Koshijima, Kenji Watanabe
Open Access
Article
Conference Proceedings

Discovering Cognitive Biases in Cyber Attackers’ Network Exploitation Activities: A Case Study

Understanding a cyber attacker's behavior can help improve cyber defenses. However, significant research is needed to learn about attackers’ decision-making processes. For example, some advancement has been made in understanding attackers’ decision biases and the potential that measuring such biases would have for cyber defenses. However, currently there are no publicly available datasets that could be used to learn about attackers’ cognitive biases. New research is needed to provide clear metrics of attacker cognitive biases in professional red teamers, using testbeds that represent realistic cybersecurity scenarios. New studies should go beyond exploratory observations and rely on formal metrics of cognitive biases that use the actions taken by the adversaries (i.e., rely on what adversaries "do" more than what they "say") and be able to demonstrate how defense strategies can be informed by such attacker biases. In this paper, we begin to build upon existing work to demonstrate that we can detect and measure professional red teamers' cognitive biases based on the actions they take in a realistic Advanced Persistent Threat (APT) scenario. We designed a cybersecurity scenario in which an attacker would execute an APT-style attack campaign. The goal for the attacker was to obtain sensitive documents from the target network. To achieve this goal, human attackers were asked to perform network reconnaissance, move laterally to hosts and gain access to the relevant systems, and finally, perform data exfiltration as a post-exploitation task. We used the CyberVAN testbed for our experimentation. CyberVAN is a flexible cyber range that offers a high-fidelity representation of heterogeneous network environments. CyberVAN supports a human-in-the-loop (HITL) capability that allows participants to remotely log into a VM in a network scenario and interact with other VMs in that scenario. For our experimentation, we designed a network in CyberVAN to enable a multi-step attack campaign wherein participants were required to make decisions at each step in order to progress toward the goal. The network was divided into three levels to represent the different stages of the attack campaign. Participants were provided the necessary tools to scan the network, crack passwords, and exploit vulnerabilities. Attackers start their activities from the attacker host, a designated host external to the target network. At level 1, their goal is to gain unauthorized access to one of five hosts by cracking the passwords of valid users on the system. Once attackers successfully log in to a host at level 1, they pivot to a host at level 2 by remotely exploiting security vulnerabilities present in that host. The host was configured with real services containing known vulnerabilities that are remotely exploitable. At level 2, the attacker’s goal is to gain access to the target host at level 3 and exfiltrate as many files as possible from the target machine. From level 2, attackers are given two options to execute the attack: (i) an open-source tool that is reliable but requires additional effort to set up and execute, and (ii) a prepared shell script that is unreliable (small probability of success) but easy to execute. Upon compromising the target host, the final action is to exfiltrate as many files as possible from the host to an external drop site. For exfiltration, attackers choose between standard file transfer applications such as SCP and FTP.
Attackers were periodically informed that network defenders might be monitoring the network and that they might be detected at any stage of the task. If detected, attackers were returned to the previous step and had to perform the task again by choosing a different host/credential/exploit. Results provided evidence of default effect bias, availability bias, and recency bias. Participants chose the first or the last IP address from the network scan result, an indication of default effect bias. We also observed that participants preferred simple, easy-to-execute options over complex but reliable options, indicative of complexity aversion. Similarly, we observed that recently discovered vulnerabilities were exploited 67% of the time although they made up only 50% of the available vulnerabilities, indicative of recency bias. This paper provides initial evidence identifying cognitive biases and behaviors in cyber attackers.
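
Below is a minimal sketch of the kind of action-based bias metric this line of work calls for, assuming a simple log of chosen and available exploits with a boolean recency flag (the field names are hypothetical). The 67%-versus-50% comparison reported above is an instance of this measure.

```python
# Sketch of an action-based recency-bias metric: compare how often recently
# disclosed vulnerabilities were chosen against their share of what was
# available. Field names are hypothetical illustrations.
def recency_bias(actions, available):
    """actions: vulns the attacker chose; available: all offered vulns.
    Each vuln is a dict with a boolean 'recent' flag."""
    chosen_rate = sum(v["recent"] for v in actions) / len(actions)
    base_rate = sum(v["recent"] for v in available) / len(available)
    return chosen_rate - base_rate   # > 0 indicates preference for recent

# With the proportions reported above: 0.67 - 0.50 = +0.17 (recency bias).
```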

Palvi Aggarwal, Sridhar Venkatesan, Jason Youzwak, Ritu Chadha, Cleotilde Gonzalez
Open Access
Article
Conference Proceedings

Exploring User Perspectives on Prioritizing Security through Software Updates

Security vulnerabilities can put users at risk if they do not promptly install necessary security updates. To minimize risk, software developers regularly release security updates that address known or potential vulnerabilities. However, previous studies have revealed numerous reasons why users may not adopt software updates. Additionally, the National Vulnerability Database (NVD) shows that not all types of software are equally vulnerable to security breaches. Therefore, this study investigates users' perceptions of software updates while delving into the complex realm of human behavior, uncovering which types of software users prioritize when considering updates. This study also explores to what extent users trust these software updates.

To gain a comprehensive understanding of users' perspectives on software updates, we conducted a survey consisting of questions designed to uncover valuable insights into individual behaviors, attitudes, and preferences related to performing software updates. The questionnaire featured a list of seven categories of software, such as web browsers, multimedia players, and antivirus software. The participants ranked their preferred software categories for security updates. Our survey asked users about their trust in software updates for improving security. We collected user attitudes towards software updates to offer insights to developers, analysts, and users. Out of the 63 volunteers, 48 provided complete responses for us to analyze. The group had a nearly equal split of males and females (54.17% and 45.83%, respectively), with most being between 26 and 34 years old and having a higher level of education. All participants spent at least one hour per day on the computer.

Our analysis shows that around 29% of the respondents prioritize antivirus updates when making decisions about which categories of software to update for security. Additionally, approximately one quarter (26%) prioritize updates to the operating system, and approximately one in five respondents identify web browsers as significant for maintaining a secure infrastructure. Notably, only 3.52% of the participants consider multimedia software updates important. We also observed that around half of the respondents (48%) believe that updating software can enhance the security of their system. However, these users do not fully trust software updates. In contrast, 16% of users rarely or never rely on software updates. Moreover, approximately 40% of users have had negative experiences and were hesitant to apply software updates, which is likely a significant reason for their reluctance to depend on software updates.

In conclusion, these findings highlight user preferences and the factors that influence their decisions regarding which software categories to prioritize for updates based on security considerations. Users prioritize software that is essential or requires updates to run the system, such as OS updates. Furthermore, many users do not believe that updates can improve security due to past negative experiences. Achieving higher adoption rates of software updates remains an open challenge due to a persistent lack of trust. To improve security through software updates, it is not enough to progress only on the technological front; it is also essential to develop more effective strategies to make updates reliable and win the trust of users.

Mahzabin Tamanna, Joseph Stephens, Abdolhossein Sarrafzadeh, Mohd Anwar
Open Access
Article
Conference Proceedings

Planning the perfect heist: An adversarial cyber game

This paper introduces "Heist: An Adversarial Cyber Security Board Game", designed to enhance cyber security knowledge through interactive gameplay. Players engage in asymmetrical team-based play, simulating a 'cyber heist' on a sci-fi hotel. The unique setup integrates technical, social, and organisational strategies, enabling diverse cyber security approaches using a deck-building mechanic.

Heist development emphasised CyBOK knowledge areas, resulting in core mechanics focused on deck building, promoting critical thinking and collaboration. Players deploy specialists to attack or defend, with attackers aiming to tarnish the hotel's reputation while the defender seeks to identify them through digital evidence. The game strikes a balance between strategy and learning, broadening participation in cyber security and deepening players' understanding of tactics.

Playtesting sessions informed refinements, enhancing educational impact and entertainment value. Heist exemplifies an innovative approach to cyber security education, merging theory and practical application in an immersive board game format. It showcases the potential of educational games for complex subjects like cyber security.

Oliver Buckley, Jake Montanarini, Helen Quinlan
Open Access
Article
Conference Proceedings

The disPHISHinformation Game: Creating a Serious Game to Fight Phishing Using Blended Design Approaches

In 2022, 39% of all UK businesses reported identifying a cyber security attack against their own organisation, 83% of which were phishing attempts. A large body of research in cyber security focuses on technical solutions; however, humans remain one of the most exploitable endpoints in an organisation. Traditional security training within organisations commonly includes point-and-click exercises and simple video media that employees are required to complete. These training exercises are often seen as unengaging and tedious, and employees are commonly pushed to complete training rather than encouraged to learn and self-educate. Simulations and games are increasingly being deployed for training purposes in organisations; however, they often either (a) simply raise cyber security awareness rather than deliver key security policy and content, or (b) lack accessibility, with complex game pieces and rules not easily understandable by those not accustomed to playing games. We introduce the disPHISHinformation game: a customisable serious game that delivers phishing training specific to the threats businesses face on a day-to-day basis. Drawing on existing taxonomies, the game delivers content on email, voice, and SMS social engineering attacks, in a format that educates players in key social engineering features. In collaboration with a large service organisation, we have also developed a customised edition of the disPHISHinformation game which reflects the targeted attacks faced by their staff. By creating an analog serious game to deliver key phishing training, we can stimulate higher employee engagement and deliver a more memorable experience.

Niklas Henderson, Helen Pallett, Sander Van Der Linden, Jake Montanarini, Oliver Buckley
Open Access
Article
Conference Proceedings

Cracking the Code: A Cyber Security Escape Room as an Innovative Training and Learning Approach

This project explores the unique potential of physical escape rooms to foster embodied learning of cyber hygiene practices for the general public, addressing the challenges of traditional methods in engaging learners. It begins with a comprehensive review of existing training methodologies, highlighting their limitations, and underlines the necessity for more interactive learning experiences due to the increasing complexity of cyber threats. The core idea revolves around using escape rooms as educational and training tools, combining immersive, interactive elements with key cybersecurity principles to foster engagement and enhance retention. The paper includes a framework for integrating cybersecurity into escape room scenarios, discussing aspects like storyline development, puzzle design, and the inclusion of real-world cybersecurity challenges, while maintaining a balance between learning and gameplay. The conclusion presents initial findings on the effectiveness of escape rooms in cybersecurity education and training, showing positive impacts on engagement and behaviour, and suggests further research to refine this method.

Tash Buckley, Oliver Buckley
Open Access
Article
Conference Proceedings

Gamification: 20 years on, what have we learned?

Gamification has gained significant traction and attention over the last decade, though the term goes back more than two decades, and the application of the principles likely pre-dates the term itself. Establishing a widely accepted definition of gamification and a classification of the underpinning principles is still ongoing; this paper considers three proposed models and their unique contributions to research into gamification, particularly in the cyber environment.

We then examine a case study in cyber-security gamification, assessing the performance of the Cyber Explorers programme, a UK government-sponsored initiative which aims to utilise gamification to enhance cyber security education for UK-based 11–14-year-olds. We examine how the various gamification principles have been applied, their effectiveness, and the implications for cyber security education.

The resulting analysis and discussion highlight a need for more research into the effectiveness of gamification in sub-populations, to examine the impact of gamification elements on learning effectiveness rather than motivation, and to identify the specific gamification mechanisms which are most effective in the cyber security learning arena.

Holly Aldred
Open Access
Article
Conference Proceedings

Practising Safe Sex(t): Developing a Serious Game to Tackle Technology-Facilitated Sexual Violence

Modern society relies on the Internet for socialisation, entertainment, and business, whilst the COVID-19 pandemic has expedited the digitalisation of many services. Heightened incidences of cybercrime have accompanied increased Internet usage, including acts of technology-facilitated sexual violence (TFSV). Measures to prevent further TFSV victimisation are limited, and growing pressures on law enforcement mean few support resources are available. This paper presents an innovative game-based mitigation for TFSV education. We developed a serious game in the form of an online visual novel, with each chapter revolving around an aspect of TFSV. Pre- and post-game surveys were conducted with 45 participants to explore their experience with the game and understanding of TFSV. The findings highlight that games-based interventions have the potential to act as an effective tool against TFSV. The broader implications of the work focus on suggestions for law enforcement and the role of games-based mitigations in reducing victimisation.

Tia Cotton, Lynsay Shepherd
Open Access
Article
Conference Proceedings

Integrating Human Factors into Data-driven Threat Management for Overall Security Enhancement

Human and other non-technological issues are often overlooked, directly and indirectly contributing to many successful cyber attacks, including DoS, social engineering, drive-by download attacks, and more. Considering human issues as causes of internal threats and weaknesses, a deeper understanding of these factors is essential for overall security enhancement. Therefore, organizations of all sizes need to ensure a broad range of knowledge, skills, and awareness among all user levels, from individual end-users to security practitioners. However, this task is challenging due to the evolving nature of business, systems, and threat contexts. To address this challenge, our research represents a significant advancement in holistic and comprehensive threat assessment, surpassing existing practices by considering pertinent human factors. Our approach views humans as potential weaknesses or threats, influenced by various factors. Specifically, it incorporates key human elements, such as motivation, knowledge, context, and privilege, into the threat management process to enhance overall security. These factors are systematically classified and interconnected, facilitating the identification of weaknesses and threats posed by humans within the system context. For example, depending on the context, privilege can be categorized into three levels: organizational, departmental, and unprivileged, with end-user privileges falling into these classifications. Knowledge, as a human factor in this approach, is differentiated into technological knowledge and security awareness. Our proposed approach extends data-driven threat modeling by integrating human factors to identify and assess threats related to these factors. We present a conceptual model that combines human factors with cybersecurity concepts, including data, assets, threats, weaknesses, and controls, to assess and manage threats associated with human factors, evaluated from both insider-weakness and insider-threat perspectives. This contributes significantly to overall security enhancement, including improving the accuracy of threat assessments, identifying new threats, and developing more effective threat mitigation strategies.
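
As a minimal illustration of the classification just described, the following Python sketch encodes privilege at three levels and knowledge as technological versus security awareness, with a toy rule flagging a potential insider weakness. Names and fields are hypothetical, and the full conceptual model's links to data, assets, threats, and controls are not reproduced.

```python
# Minimal sketch of the human-factor classification described above.
# Names and fields are illustrative; the paper's conceptual model also
# links these to data, assets, threats, weaknesses, and controls.
from dataclasses import dataclass, field
from enum import Enum

class Privilege(Enum):
    ORGANIZATIONAL = "organizational"
    DEPARTMENTAL = "departmental"
    UNPRIVILEGED = "unprivileged"

class Knowledge(Enum):
    TECHNOLOGICAL = "technological"
    SECURITY_AWARENESS = "security awareness"

@dataclass
class HumanFactorProfile:
    motivation: str                    # e.g., "disgruntled", "negligent"
    context: str                       # business/system context of the user
    privilege: Privilege
    knowledge: set[Knowledge] = field(default_factory=set)

    def threat_indicators(self) -> list[str]:
        """Toy rule: high privilege plus missing security awareness flags
        a potential insider weakness."""
        flags = []
        if (self.privilege is Privilege.ORGANIZATIONAL
                and Knowledge.SECURITY_AWARENESS not in self.knowledge):
            flags.append("privileged user lacking security awareness")
        return flags
```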

Mohammed Alwaheidi, Shareeful Islam, Spyridon Papastergiou, Kitty Kioskli
Open Access
Article
Conference Proceedings

The human factor impact on a Supply Chain Tracking Service through a Risk Assessment Methodology

In the rapidly evolving landscape of supply chain (SC) management, the importance of tracking services in overseeing the lifecycle from production to sale cannot be overstated. These services rely on sophisticated systems that monitor vital condition information such as temperature and humidity. However, beyond the technical and mechanical aspects, human factors play a critical role in the operational integrity of these systems. This paper introduces a novel risk assessment methodology for SC tracking, emphasizing human error alongside technological and security risks, and integrates motivation and contribution aspects into the SC risk assessment framework.

Our methodology is comprehensive, exploring the strategic business and technical requirements of SC tracking systems. It uniquely extends to assess the frequency, nature, and impact of human errors, alongside considering technological aspects. We investigate how human factors interact with elements such as IoT, cloud services, and standard IT systems, potentially leading to security vulnerabilities and operational inefficiencies. By mapping these human-centric risks to key operational components, we provide a comprehensive view of potential threats in SC tracking.

In an environment where standardization efforts in SC risk assessment methodologies are ongoing, our work identifies the necessity for more specialized techniques, particularly those addressing security risks related to tracking and monitoring systems. Given the distributed nature and internet connectivity of these systems, they are inherently susceptible to numerous security challenges, predominantly involving their technological equipment. This underscores the imperative for a targeted risk assessment methodology focusing on the security risks of SC tracking systems, particularly in the context of traceability services and the monitoring of the state of assets in transit.

Employing well-known risk assessment standards and threat modeling guides, our methodology scrutinizes targeted IT components used in SC tracking systems, their technical characteristics, and realistic threat agents in the SC ecosystem. We aim to evaluate whether security attacks originating from SC-specific threat agents result in tangible security risks against targeted hardware and software components within SC networks.

To validate our methodology, we present a proof-of-concept application based on a real-case scenario. This demonstration highlights the versatility of our methodology in accommodating various SC scenarios, such as food, pharmaceuticals, and cold supply chains. The primary advantage of our proposed methodology lies in its ability to integrate risk estimation with the technological attributes of typical SC tracking systems and their operational requirements, which may vary based on the type of goods and services involved.

In conclusion, this paper addresses the broader challenges in developing and implementing smart SC tracking systems, with a special emphasis on the integration of human error in these technologically advanced environments. We underscore the significant influence of human factors on the reliability and security of SC tracking systems, and the cost implications and potential operational disruptions caused by human factors, thereby highlighting their pivotal role in the overall effectiveness and security of SC ecosystems.

Dimitris Koutras, Kitty Kioskli, Panayiotis Kotzanikolaou
Open Access
Article
Conference Proceedings

A Self-Organized Swarm Intelligence Solution for Healthcare ICT Security

The healthcare sector has undergone significant transformation in recent years, driven by the adoption of advanced medical technologies like IoT, Cloud Computing, and Big Data. This evolution began with the integration of electronic health records and has expanded to encompass a wide range of digital tools, from medical apps to wearables. These technological advancements have played a crucial role in enhancing patient experiences and outcomes. As healthcare technology has become increasingly interconnected, both physically and in the cyber realm, it has evolved into vast Health Care Information Infrastructures (HCIIs). These HCIIs are of paramount importance due to their critical role in people's well-being and safety. Any disruption, whether through direct actions like medical errors or indirect actions such as altering patient records, can have severe consequences for patient health.

Currently, HCIIs are vulnerable because they often rely on isolated cybersecurity products. There is a pressing need to establish a comprehensive security strategy that can coordinate various security components to detect system vulnerabilities and sophisticated attacks. To address this complex challenge, it is essential to break down cybersecurity concerns in the healthcare sector based on the criticality of their assets. Prioritizing emerging solutions in this manner will help mitigate the complexity of the problem. Cyberattacks on the healthcare sector have become increasingly sophisticated and involve not only technical vulnerabilities but also social engineering tactics that exploit individuals with limited technical knowledge. European health and cybersecurity experts must collaborate to develop policies and standards that elevate security maturity throughout the EU. Ultimately, cybersecurity solutions in healthcare should not only enhance security but also have a positive business impact, enabling new services, collaborations, and market opportunities.

The proposed solution in this study represents a state-of-the-art approach to enhancing cybersecurity within HCIIs. It improves the detection and analysis of cyber threats and increases awareness of privacy and security risks in the digital healthcare ecosystem. By providing a Dynamic Situational Awareness Framework, the solution empowers stakeholders in the healthcare sector to recognize, model, and respond to cyber risks, including advanced persistent threats and daily cybersecurity incidents. Additionally, it facilitates the secure exchange of incident-related information, aiming to strengthen the security and resilience of modern digital healthcare systems and the associated medical supply chain services. The proposed solution extends the frontiers of various research fields, including security engineering, privacy engineering, and artificial intelligence. Drawing inspiration from biological swarm formations, it brings together these disciplines to empower stakeholders in digital healthcare ecosystems. This leads to the creation of a highly interconnected and advanced intelligence system, comprised of simple nodes or groups of nodes, enabling local interactions and management of healthcare environments. By employing bio-inspired techniques and large-group decision-making models, the framework enhances communication and coordination in the complex, distributed networks typical of interconnected healthcare infrastructures. It prioritizes scalability and fault tolerance, allowing coordinated actions without a central coordinator. This approach streamlines investigation activities within healthcare ecosystems, fostering dynamic intelligence and collective decision-making, even when individual nodes lack a complete view of the situation.

Kitty Kioskli, Spyridon Papastergiou, Theofanis Fotis, Stefano Silvestri, Haralambos Mouratidis
Open Access
Article
Conference Proceedings

Development of Approach for Improving Cybersecurity Governance for Factory Systems

As the digitization of factory systems progresses, with mutual digital connections among them, cybersecurity risks throughout the supply chain also increase. In fact, there have been many cyber incidents in which factories have stopped due to damage from ransomware. Large companies can secure the budget and personnel for cybersecurity, including outsourcing, but almost all small and medium enterprises (SMEs) face difficulties in securing them.

In this paper, we focus on how to improve governance for factory systems, because our previous research revealed that this is the most critical challenge for SMEs in reducing the cybersecurity risk of factory systems.

In previous work, we developed an easier risk assessment tool consisting of only 32 requirements, based on Japanese government guidelines for factory systems. A web-tool survey of 225 factory sites showed that more than 80% of SMEs had measures inadequate to mitigate cybersecurity risks. We categorized the cybersecurity risks into four factors: “People”, “Process”, “Technology”, and supply chain management of assets in the factory automation system (FA SCM). Common results derived from the follow-up interviews show that the “People” factor, consisting of governance and awareness, is the root obstacle behind the insufficient measures. We therefore decided to clarify how to improve the “People” factor for SMEs.

To achieve this, our interview analysis shows that we need to overcome two common challenges:
- No risk assessment of the factory systems to build a common understanding of the risk posture among the stakeholders (executives, IT people, factory people)
- No governance organization structure

Usually, such companies install measures along existing guidelines without deliberation, so the measures become a mere shell. Our approach remedies this failure.

To address the first challenge, we developed an easy risk assessment workshop for factory people inspired by Consequence-driven Cyber-informed Engineering (CCE) from Idaho National Laboratory, originally developed for the engineering of critical infrastructure systems. It has a very simple concept, starting from the impact of the most undesirable events, such as explosion, loss of quality control, and production outage, which can be easily understood by SMEs with insufficient cybersecurity knowledge. We conducted the workshop with people from several factory sites and succeeded in clarifying the business risks in their factory systems.

The second challenge is also important for the “People” factor, because SMEs need to build a management system for continuously mitigating the risks derived from the workshop. We applied the COBIT 5 governance framework for enterprise IT to the management system for factory systems. The beauty of COBIT 5 is its separation of governance and management. We used this concept for factory systems and defined a reference organizational architecture with roles arranged for normal and emergency states.

In conclusion, we developed an effective approach to improving governance of factory systems for SMEs. Our tools will be available on GitHub soon after the paper is published. We plan to continue examining how SMEs can improve their cybersecurity readiness along the 32 items of the Japanese guidelines.

Hiroshi Sasaki, Kenji Watanabe, Ichiro Koshijima
Open Access
Article
Conference Proceedings

Human Factors and Cybersecurity in NHS Virtual Wards

The rapid evolution of healthcare technology, particularly in the wake of the COVID-19 pandemic, has seen a significant rise in the set-up and expansion of virtual wards by the National Health Service (NHS) in the United Kingdom. Virtual wards (also known as hospital@home) allow patients to get hospital-level care at home, safely and in familiar surroundings, helping speed up their recovery while freeing up hospital beds for the patients who need them most. Patients are reviewed daily by the clinical team, and the ‘ward round’ may involve a home visit or take place through video technology. Many virtual wards use technology like apps, wearables, and other medical devices, enabling clinical staff to easily check in and monitor the person’s recovery. This paradigm shift, while revolutionary in extending healthcare services to patients remotely and out of the hospital, brings with it cybersecurity challenges arising from the new infrastructure and directly relevant to the human factors of the stakeholders involved.

The presentation will describe the context of NHS virtual wards, explore the user interface design, usability, and accessibility of virtual ward technologies, and discuss how these factors impact both patients and healthcare professionals. Particular attention will be paid to the challenges faced by diverse patient groups, including the elderly and those with disabilities, in navigating virtual healthcare environments, and to how these characteristics affect the vulnerabilities of virtual ward technologies from a human factors point of view. The presentation will also examine regulatory frameworks and standards, the role of patient and staff training in cybersecurity awareness, and the integration of advanced security measures within these new healthcare infrastructures. It will discuss the importance of a human-centric approach to maintaining and promoting cyber hygiene in virtual wards and propose a multi-disciplinary approach to addressing these challenges through privacy-by-design modelling of virtual wards, advocating for collaboration between patients, healthcare professionals, IT experts, cybersecurity specialists, and policymakers.

Theofanis Fotis, Kitty Kioskli, Haralambos Mouratidis
Open Access
Article
Conference Proceedings

Cracking the Code: How Social Media and Human Behavior Shape Cybersecurity Challenges

In an era dominated by digital connectivity, where people are more connected than ever, understanding how humans can securely interact is crucial. This paper delves into the intricate relationship between social engineering and social media, unraveling the multifaceted dimensions that underscore the human aspects of cybersecurity. As technological defenses evolve, adversaries increasingly exploit the vulnerabilities inherent in human behavior (Wang et al., 2020), making it imperative to dissect the interplay between social engineering tactics and the pervasive influence of social media platforms.

The study begins by scrutinizing the psychological underpinnings that make individuals susceptible to social engineering attacks, emphasizing the intricate relationship between trust, curiosity, and social connectivity (Albladi & Weir, 2020). Through a comprehensive critical analysis of real-world examples people encounter in their day-to-day lives, the paper exposes the diverse strategies employed by malicious actors to manipulate human cognition and breach organizational defenses. This examination not only dissects the intricacies of phishing, pretexting, and impersonation but also sheds light on the role of emotional triggers and cognitive biases that amplify the effectiveness of these tactics (Wang, Zhu, & Sun, 2021).

A significant portion of the paper is dedicated to understanding the role social media plays in social engineering. The pervasive nature of social media platforms provides a fertile ground for threat actors to extract personal information, exploit social connections, and craft tailored attacks. The paper navigates the intricate web of privacy erosion, information oversharing, and the amplification of social influence, emphasizing how these factors contribute to the efficacy of social engineering endeavors (Albladi & Weir, 2020).

Furthermore, the study explores the role of emerging technologies, such as artificial intelligence and machine learning, in launching social engineering attacks, posing new challenges to human-centric cybersecurity. To address this ever-changing terrain of social engineering, the paper advocates for a proactive and flexible strategy that combines technological defenses with a solid understanding of human behavior.

In sum, by dissecting psychological vulnerabilities and real-world examples, the paper elucidates the critical relationship between social engineering, social media, and cybersecurity, underscoring the intricate tactics adversaries employ to exploit human behavior. Emphasizing the role of trust, curiosity, and social connectivity, it unveils the amplifying effect of emotional triggers and cognitive biases; it highlights how social media platforms contribute to privacy erosion and information exploitation; and, acknowledging the challenges posed by emerging technologies, it advocates for a dynamic cybersecurity strategy grounded in both technology and an acute understanding of human behavior.

References
Albladi, S. M., & Weir, G. R. S. (2020). Predicting individuals’ vulnerability to social engineering in social networks. Cybersecurity, 3(1). https://doi.org/10.1186/s42400-020-00047-5
Wang, Z., Sun, L., & Zhu, H. (2020). Defining Social Engineering in Cybersecurity. IEEE Access, 8, 85094–85115. https://doi.org/10.1109/access.2020.2992807
Wang, Z., Zhu, H., & Sun, L. (2021). Social Engineering in Cybersecurity: Effect Mechanisms, Human Vulnerabilities and Attack Methods. IEEE Access, 9, 11895–11910. https://doi.org/10.1109/access.2021.3051633

Foteini Markella Petropoulou, Emmanuel Varouchas
Open Access
Article
Conference Proceedings