Human Factors in Cybersecurity


Editors: Tareq Ahram, Waldemar Karwowski

Topics: Human Factors in Cybersecurity

Publication Date: 2022

ISBN: 978-1-958651-29-2

DOI: 10.54941/ahfe1002194

Articles

A Metric to Assist in Detecting International Phishing or Ransomware Cyberattacks

Over the past decade, the number of cyberattacks such as ransomware, phishing, and other forms of malware has increased significantly, as has the danger to innocent users. The ability to launch such devastating attacks is no longer limited to well-funded, highly structured organizations, including government agencies whose missions may well include cyberattacks. The focus of our study is threats to an individual not from such highly organized institutions, but rather from less organized cybercriminal organizations with limited resources. The Internet provides ample opportunities for such criminal organizations to launch cyberattacks at minimal cost. One tool available to such lower-level criminal organizations is Google Translate (GT), which may be needed to launch a cyberattack on a user in a relatively advantaged country such as the United States, United Kingdom, or Canada. It has been observed that many such attacks originate in a lesser developed country (LDC), where the local language is not one commonly spoken in the target countries, for example English. It is a reasonable assumption that informal cyberattackers may not have a command of English, and to mount an attack online in English they may require a mechanism such as the no-cost GT. In previous work, a number of authors have attempted to develop an index to measure the efficiency of what might be called an ABA translation. This involves beginning with a test document in language A, using GT to translate it into language B, and then translating back again to A. The original text is then compared to the transformed text by applying a modified Levenshtein distance computation to the two A versions. This paper analyzes the process of determining an index to detect whether a text has been translated from an original language and location, assuming the attack document has been written in one language and translated using GT into the language of the person attacked.
The steps involved in this analysis include:
a) Consistency: to determine consistency in the use of the ABA/GT process, the primary test selection is compared with random samples from the test media;
b) Expanded selection of languages for translation: prior work established the technique for 12 language pairs. The current work extends the analysis to a wider set of languages, including those reported as having the highest levels of cyberattacks;
c) Back translation of selected languages: used to assess the quality of the translations that are made;
d) New language pairs: by analyzing the countries and indigenous languages associated with the highest levels of cyberattack and of cyberdefense, additional language pairs are added to the analysis;
e) Comparison to prior results: the results found in this paper are used to propose a network covering all language pairs considered in the analysis.
The end product is a metric giving the probability of determining the original source language of the cyberattack, as compared to the translation into the victim's language, with the expectation that this will increase the likelihood of identifying the attackers.
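The ABA round trip described above can be sketched in a few lines of Python. This is a minimal illustration only: a stub function stands in for the Google Translate calls, and a plain (unmodified) Levenshtein distance is used in place of the paper's modified variant.

```python
# Minimal sketch of the ABA comparison step. The translate() argument is a
# placeholder for Google Translate; a toy stub is used so the example runs
# without network access.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def aba_distance(text: str, translate) -> int:
    """Distance between the original A text and its A -> B -> A round trip."""
    round_trip = translate(translate(text, "A", "B"), "B", "A")
    return levenshtein(text, round_trip)

# Toy "translation" stub that loses English articles on the way to language B:
stub = lambda t, src, dst: t.replace(" the ", " ") if dst == "B" else t
print(aba_distance("send the payment to the account", stub))   # -> 8
```

A larger ABA distance indicates that the round trip through language B distorts the text more, which is the raw signal behind the proposed index.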

Wayne Patterson, Jeremy Blackstone
Open Access
Article
Conference Proceedings

Insider Threat: Cognitive Effects of Modern Apathy towards Privacy, Trust, and Security

The purpose of this study was to analyze how contemporary levels of social apathy towards privacy have changed over time, from before the integration of computers into American society. With private information stored in a computational net of digital information, rather than in personal possession and control, there may be signals of an increase in the "inattentive" insider threat to cybersecurity. Using the results of sequential privacy index surveys (Westin, 2003; Kumaraguru & Cranor, 2005), along with trait and state subjective questionnaires, changes and possible shared factors in attitudes towards privacy were evaluated. It was hypothesized that there would be significant evidence for 1) change over time in concern for privacy, 2) high distrust, 3) high apathy, 4) low motivation, and 5) differences between privacy group membership and subjective measure factors. These questionnaires were randomly administered to volunteer undergraduate psychology students at the University of Central Florida (UCF), who were compensated with course extra credit through a university system. The results of this study suggested that privacy concern has lowered over time, and that there was an overall high level of subjective apathy and a high level of instrumental motivation, which was correlated with the level of privacy concern. This research looks for indicators of lowered concern for privacy in order to mitigate the inattentive insider threat in the workplace. Future phases of this research will use the same privacy and subjective questionnaires with the addition of an Implicit Association Test (IAT) for privacy and apathy in the primed and unprimed positions. This research will be used to validate an IAT for privacy, conduct a cross-factor analysis of privacy concern, state, and traits, and test the ability to prime privacy concern.

Valarie Yerdon, Peter Hancock
Open Access
Article
Conference Proceedings

A Didactic Tool for Digital Forensics

Several tools exist for performing digital forensics investigations on evidence data. While the vast variety of options provides a wide span of choices, this variation itself contributes to the complexity of learning and navigating these tools. To facilitate users' learning efforts, we present a didactic tool that can be used to explore different digital forensics tools for investigating various evidence files on different OS platforms. We use synthetically generated data in the form of a made-up scenario that offers safe, realistic, yet reliable data analysis. The digital forensics tools we use are Autopsy, WinHex, ProDiscover, and StegHide, and we demonstrate the execution of these tools on two different OS platforms, Windows and Mac. Our tool promises to offer explanation of and deep insight into commonly available digital forensics tools, and is offered to serve digital forensics students and professionals.

Ebru Cankaya, Anindita Palit, Elissa Williams
Open Access
Article
Conference Proceedings

A Closer Look at Insider Threat Research

Insider threats are a danger to organizations everywhere, and no organization is immune to the effects of an insider incident. Organizations suffer from individuals whose actions expose the organization to risk or harm in some way. This includes insiders who intentionally or unintentionally take actions that bring harm or significantly increase risk to the organization. Insider security breaches have been identified by organizations as a pressing problem with no simple solution. This paper presents a systematic literature review of published, scholarly articles on insider threat research from 2010 to 2020. The focus of this literature review is to survey the topics, methodologies, and theories of current insider threat research; its goal is to provide an overview of the trends in that research. Fifty-two studies were identified, and about half the papers dealt with identifying potential insiders through machine learning techniques. The most popular trend was the use of learning-based algorithms, such as neural networks and support vector machines, that classify a user as an insider versus a non-insider. Aside from this popular modeling approach, the other publications included in our review focused on human factors related to insider threat, and the common methodology for these papers was the use of surveys and questionnaires. Another trend identified in the literature was the use of behavioral patterns as an insider threat indicator. Lastly, researchers identified best practices for organizations to address insider threats. The outcome of this literature review identified trends, best practices, and knowledge that can be used to further develop insider threat frameworks and methodologies. Furthermore, this literature review presents implications for researchers, including challenges, issues, and future research directions.

Ivan Kong, Masooda Bashir
Open Access
Article
Conference Proceedings

Social Engineering and Human-Robot Interactions' Risks

Modern robotics seems to have taken root in the theories of Isaac Asimov, beginning in 1941. One area of research that has become increasingly popular in recent decades is artificial intelligence, or A.I., which aims to use machines to solve problems that, according to current opinion, require intelligence. This is related to the study of "social robots". Social robots are created to interact with human beings; they have been designed and programmed to engage with people by leveraging a "human" aspect and various interaction channels, such as speech or non-verbal communication. They therefore readily solicit social responsiveness in people, who often attribute human qualities to the robot. Social robots exploit the human propensity for anthropomorphism, and humans tend to trust them more and more. Several issues could arise from this kind of trust and from the ability of a "superintelligence" to "self-evolve", which could lead to the violation of the purposes for which it was designed by humans, becoming a risk to human security and privacy. This kind of threat concerns social engineering, a set of techniques used to convince users to perform a series of actions that allow cybercriminals to gain access to the victims' resources. The human factor is the weakest link in the security chain, and social engineers exploit human-robot interaction to persuade an individual to provide private information. An important research area that has shown interesting results for understanding human interaction with robots is "cyberpsychology". This paper aims to provide insights into how interaction with social robots could be exploited not only in a positive way but also, using the same social engineering techniques borrowed from "bad actors" or hackers, to achieve malevolent and harmful purposes for man himself. A series of experiments and interesting research results will be shown as examples, in particular concerning the ability of robots to gather personal information and display emotions during interaction with human beings.
Is it possible for social robots to feel and show emotions, and could human beings empathize with them? A broad area of research, which goes by the name of "affective computing", aims to design machines that are able to recognize human emotions and respond to them consistently. The aim is to apply human-human interaction models to human-machine interaction. A fine line separates the opinions of those who argue that, in the future, machines with artificial intelligence could be a valuable aid to humans from those who believe that they represent a huge risk that could endanger human protection systems and safety. It is necessary to examine this new field of cybersecurity in depth to determine the best path to protect our future. Are social robots a real danger?
Keywords: Human Factor, Cybersecurity, Cyberpsychology, Social Engineering Attacks, Human-Robot Interaction, Robotics, Malicious Artificial Intelligence, Affective Computing, Cyber Threats

Ilenia Mercuri
Open Access
Article
Conference Proceedings

Isolating Key Phrases to Identify Ransomware Attackers

Ransomware attacks are a devastatingly severe class of cyber-attacks, capable of crippling an organization through disrupted operations or egregious financial demands. A number of solutions have been proposed to decrease the risk of ransomware infection or to detect ransomware once a system has been infected. However, these proposed solutions do not address the root of the problem: identifying the adversary behind the attack. This study takes steps towards identifying an adversary by using linguistic analysis of ransomware messages to ascertain the adversary’s language of origin. Our proposed method begins with existing ransomware messages. We isolate commonly used phrases by analyzing a number of notable ransomware attacks: CryptoLocker, Locky, Petya, Ryuk, WannaCry, Cerber, GandCrab, SamSam, Bad Rabbit, and TeslaCrypt. We then translate these phrases from English to another language and back to English using Google Translate, and calculate the Levenshtein distance between the two English phrases. Next, we identify the languages that yield a Levenshtein distance greater than 0 for these phrases, due to differences in how parts of speech are implemented in the respective languages. Finally, we analyze new ransomware messages and rank the languages from easiest to most difficult to distinguish.
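The final ranking step can be sketched as follows. The per-language round-trip distances below are made-up illustrative numbers, not the paper's measurements, and the language names are placeholders.

```python
# Hedged sketch of the ranking step: given round-trip Levenshtein distances
# per candidate source language (hypothetical numbers), rank languages by how
# distinguishable their back-translations are from the English original.
# A larger mean distance means the language is easier to distinguish.

phrase_distances = {          # language -> distance for each test phrase
    "lang_x": [4, 7, 3],
    "lang_y": [0, 1, 0],
    "lang_z": [9, 12, 8],
}

def rank_languages(dists: dict) -> list:
    """Sort candidate source languages, most distinguishable first."""
    mean = lambda xs: sum(xs) / len(xs)
    return sorted(dists, key=lambda lang: mean(dists[lang]), reverse=True)

print(rank_languages(phrase_distances))   # -> ['lang_z', 'lang_x', 'lang_y']
```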

Jeremy Blackstone, Wayne Patterson
Open Access
Article
Conference Proceedings

Information Security Awareness and Training as a Holistic Key Factor – How Can a Human Firewall Take on a Complementary Role in Information Security?

Human elements have been identified as a factor in over 95% of all security incidents. Current technical preventive, corrective, and defensive mechanisms address intelligent and practical approaches to increase the resilience of information technology (IT) systems. However, these approaches do not fully consider the behavioral, cognitive, and heterogeneous motivations that lead to human failure in the security causal chain. In this paper, we present the Awareness Continuum Management Model (ACM2), which is a role-based and topic-based theoretical approach for an information security awareness and training program that uses Boyd’s observe–orient–decide–act (OODA) loop as a framework. The proposed ACM2 is based on the situational engineering method and regards the human firewall as an integral, indispensable, and complementary part of the holistic approach to increase IT systems’ resilience. The proposed approach can be applied to different types of organizations and critical infrastructure and can be integrated into existing training programs.

Erfan Koza
Open Access
Article
Conference Proceedings

Cyberdefense Adaptive Training Based on the Classification of Operator Cognitive State

To face the increasing number and variety of cyberattacks, the training and adaptation of cyberdefense operators become critical and should be managed throughout their careers. It is therefore necessary to develop adaptive training methods that can quickly detect operators' weaknesses and propose a strategy to reinforce their skills on these points. This paper presents the choice of a cognitive model to guide the development of adaptive training software, and proposes a review of several elements that contributed to the development of the model. Cyberattacks are continuously increasing in variety and number, and therefore require constant adaptation from the operator, who must react to each attack rapidly and efficiently. To face these changes, cyber operators must be trained regularly. This training aims to: 1) keep the knowledge of cyber operators up to date, 2) train cyber operators to use new tools, and 3) allow cyber operators to react appropriately to new attacks. In this regard, adaptive training software supports the training of cyberdefense operators in order to improve their performance in real conditions. Proposing adaptive training software involves satisfying several requirements, such as an ecological environment, a system to adapt the training scenario autonomously, and a way to assess the difficulties experienced by the trainee. To support this dynamic and customised adaptation of the training scenario, it is important to detect or predict when errors may occur. For this purpose, behavioural and physiological data can be used to assess the variations in performance and mental workload that can lead to an error. This paper deals with the choice of a cognitive model that could support the design of software for adaptive training in the cyberdefense field.
Such a model would allow us to understand the different cognitive processes used by the operator to perform tasks, and to identify the factors that could contribute to performance decrement. This model can then orient the selection of appropriate physiological and behavioural indicators to measure what parts of the task cause difficulty to the operator.

Yvan Burguin, David Espes, Philippe Rauffet, Christine Chauvin, Philippe Le Parc
Open Access
Article
Conference Proceedings

Exploring Human and Environmental Factors that Make Organizations Resilient to Social Engineering Attacks

In this explorative research, social engineering attacks were studied, especially those that failed, in order to help organisations become more resilient. Physical, phone, and digital attacks were carried out using a script following the ‘social engineering cycle’. We used the COM-B model of behaviour change, refined by the Theoretical Domains Framework, to examine by means of a survey how Capability, Motivation, and foremost Opportunity factors help to increase the resilience of organisations against social engineering attacks. Within Opportunity, social influence seemed of particular importance. Employees of small-sized enterprises (<50 employees) were more successful in withstanding digital social engineering attacks than employees of larger organisations. An explanation for this could be a greater amount of social control: these employees work in close proximity to one another, so they are able to check irregularities or warn each other. Also, having a conversation protocol in place for interacting with outsiders was a measure taken by all organisations where attacks by telephone failed, making it more difficult for an outsider to gain access to the organisation by means of social engineering. This paper ends with a discussion and some recommendations for organisations, e.g. on the design of the work environment, to help increase their resilience against social engineering attacks.

Michelle Ancher, Erbilcan Aslan, Rick Van Der Kleij
Open Access
Article
Conference Proceedings

Assessing Human Factors and Cyber Attacks at the Human-Machine Interface: Threats to Safety and Pilot and Controller Performance

The current state of automated digital information in aviation continues to expand rapidly as NextGen ADS-B(In) systems become more common in the form of Electronic Flight Bag (EFB) pad devices brought onto the flight deck. Integrated systems including satellites, aircraft, and air traffic control (ATC) data currently are not effectively encrypted and invite exposure to cyber attacks targeting flight decks and ATC facilities. The NextGen ATC system was not designed from the outset to identify and nullify cyber threats or attempts at disruption, and the safety gap has widened. Performance error at digital human-machine interfaces (HMI) has been well documented in aviation and now presents a potentially significant threat, as the HMI can be more susceptible to human error under cyber attack. Examples of HMI errors arising from digital information produced by automated systems are evaluated by the authors using HMI flaws discovered in the recent Boeing 737 MAX accidents. SHELL computer diagrams for both the digital flight deck and ATC facilities illustrate how the system is now interconnected with respect to potential cyber threats, and identify how human factors consequences that compromise HMI safety and operator performance present potential dangers. Aviation Safety Reporting System (ASRS) data are examined and confirm HMI threats. The authors contrast various HMI errors with cyber attack effects on cognition, situational awareness, and decision making. A focused examination of cyber attack effects on cognitive metrics suggests that operators' cognitive clarity is confounded when they are confronted with conflicting or confusing indications at the HMI. The difficulty of successfully identifying a cyber attack, and the actions taken as human factors countermeasures, are illustrated in the context of the HMI environment.
The Human Factors Analysis and Classification System (HFACS) is used to show how cyber attacks could occur and be addressed, along with a dual-path solution.
Keywords: NextGen, Cyber attack, SHELL, HMI, Cognitive load, HFACS

Mark Miller, Sam Holley
Open Access
Article
Conference Proceedings

Navigating through Cyber Threats, A Maritime Navigator’s Experience

Cyber threats are an emerging risk in the maritime industry. If the navigational systems on board a ship fail to function because of a cyber incident, the navigator is an important asset who is expected to handle the problem and provide a solution to maintain the safety of the crew, the vessel, and the environment. The International Maritime Organization (IMO) urges the shipping industry to be resilient towards cyber threats. To facilitate enhanced operational maritime cyber resilience, there is a need to understand how navigators interpret cyber threats, which can be essential to safely conducting nautical operations. This paper presents a qualitative study of navigators’ understanding of cyber threats based on interviews with ten navigators, and provides recommendations for how this knowledge can contribute to enhanced maritime cyber resilience.

Erlend Erstad, Mass Soldal Lund, Runar Ostnes
Open Access
Article
Conference Proceedings

A Coherence Model to Outline Obstacles and Success Factors for Information Security from the CISO's Point of View

Against the backdrop of the progressive digitalization of Critical Infrastructures (CRITIS), especially within socio-technical fields, this paper addresses the identification of obstacles as well as critical technical and human success factors, which play an essential role in efficient information security management. Furthermore, the focus is also put on crystallizing differentiated views regarding the meaningfulness and usefulness of laws. To this end, we conducted a study with 86 chief information security officers in Germany, comprising CRITIS (76% of participants) and non-CRITIS (24% of participants) organizations: data center operators (14), water and wastewater utilities (25), energy supply companies (33), and healthcare stakeholders (14). The study is based on a methodologically pluralistic orientation in which, in addition to quantitative methods for empirical data collection, other analytical approaches are used to determine coherence and correlation. As an artifact, the empirically validated factors are compiled intersectorally in a coherence model and related in terms of causality.

Erfan Koza, Asiye Öztürk
Open Access
Article
Conference Proceedings

Privacy Concerns about Smart Home Devices: A Comparative Analysis between Non-Users and Users

Privacy concerns of smart home device (SHD) users have been largely explored, but those of non-users are under-explored. The success of smart home technology comes to fruition only when the concerns of both users and non-users are addressed. Understanding non-user concerns is essential to inform the design of user-centric privacy-preserving SHDs, facilitate acceptance, and bridge the digital divide between non-users and users. To address this gap, we conducted a survey of SHD non-users and comparatively analyzed their privacy concerns with those of users.
Methods: We used university email list-servs, snowball sampling, and random sampling methods to recruit participants (n=91) for an IRB-approved online survey titled ‘smart home study’. Our pre-tested questionnaire asked about SHD (non-)usage, privacy concerns (open-ended), suggestions for developers, and demographics. We followed a mixed-methods approach to analyze privacy concerns (qualitative/thematic), explore non-use reasons (qualitative/thematic), compare non-user and user concerns (quantitative), and analyze design suggestions (qualitative/thematic).
Results: Thematic analysis of the privacy concerns of non-users (n=41) and users (n=50) by two researchers performing open coding (Cohen’s kappa = 0.8) resulted in 17 codes. We then performed axial coding to generate three thematic areas of privacy concerns. The first theme was ‘data collection concerns’, which included five codes: recording audio/video, tracking occupancy, listening to private conversations, monitoring usage/behavior, and identity theft. The second theme was ‘data sharing concerns’, which included four codes: selling data, third-party data access, leakage without consent, and marketing data. The third theme was ‘data protection concerns’, which included eight codes: hacking, data handling, protecting data, secondary use, aggregation, data abuse, data loss, and fraud.
The three privacy concern themes belong to the personal communication and personal data dimensions of privacy. A chi-square test between non-users and users showed that the privacy concerns of non-users differed significantly (χ² = 8.46, p < 0.05) from those of users. Non-users reported higher levels of concern in the data collection and data protection themes than users (46% vs 24% and 34% vs 30%, respectively). However, non-users reported fewer concerns in the data sharing theme than users (15% vs 28%, respectively). Most non-users reported their reason for non-use to be privacy concerns (68%). Other non-use reasons included lack of interest in SHDs (32%), cost (22%), lack of perceived usefulness (12%), insecurity or the potential for hacking (10%), and perceived difficulty of usage (7%). The thematic analysis of participants’ suggestions for developers resulted in four main themes: (a) data anonymization and minimization, (b) data protection and security, (c) transparent data use policies, and (d) user-centric practices. Based on our findings, we recommend that developers address the data collection and data protection concerns to allow SHD non-users to consider using them. In addition, we recommend addressing data sharing concerns to retain the trust of current users. We discuss some guidelines in the paper.
Conclusion: This paper contributes by eliciting SHD non-user privacy concerns and provides insights on addressing those concerns, which will be useful for developers in the design of user-centric privacy-preserving SHDs.
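The chi-square comparison reported above can be reproduced in outline. The contingency counts below are hypothetical, chosen only to illustrate a 2×3 (group × theme) test of independence; they are not the study's data.

```python
import math

# Hedged sketch of a chi-square test of independence on a 2x3 contingency
# table (group x concern theme). Counts are made up for illustration.
observed = [          # rows: non-users, users
    [19, 6, 14],      # cols: collection, sharing, protection
    [12, 14, 15],
]

def chi_square(table):
    """Return (statistic, degrees of freedom) for a test of independence."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    n = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

stat, df = chi_square(observed)
p = math.exp(-stat / 2)   # exact survival function of chi-square when df = 2
print(round(stat, 2), df, p < 0.05)
```

The closed-form p-value shortcut holds only for df = 2; for other table shapes a library routine (e.g. a statistics package's chi-square survival function) would be needed.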

Chola Chhetri, Vivian Genaro Motti
Open Access
Article
Conference Proceedings

A Software Security Study among German Developers, Product Owners, and Managers

Online news portals report almost daily on security incidents in all kinds of software products in finance, health, and engineering. Moreover, multiple security reports conclude that there is a growing number of security vulnerabilities, attacks, and incidents. This raises the question of the extent to which companies address software security while developing and operating their products. This paper reports on the results of an extensive study among developers, product owners, and managers in Germany. Our results show that ensuring security is a multi-faceted challenge for German companies, involving low awareness, inaccurate self-assessment, and a lack of competence on the topic of secure software development among all stakeholders. Thus, there is an urgent need to improve the current situation.

Stefan Dziwok, Sven Merschjohann, Thorsten Koch
Open Access
Article
Conference Proceedings

From Security-as-a-Hindrance Towards User-Centred Cybersecurity Design

Cybersecurity controls in the workplace are viewed by many people as a hindrance that results in wasted time. End-users often bypass controls to get their work done, and because of this, even the most technically secure systems can become unsecured. One crucial reason for this could be a lack of attention paid to usability factors by the software development teams that design controls. In this paper I investigate how to design cybersecurity controls in such a way that the user is more likely to behave in a secure manner when confronted with them. I put forward three practices that, when employed alongside each other, hold the promise of producing usable and effective cybersecurity controls.

Rick Van Der Kleij
Open Access
Article
Conference Proceedings

Security in Vehicle-to-Infrastructure Communications

By 2020, the number of connected vehicles was projected to reach 250 million units, meaning that one in five vehicles worldwide would have some form of wireless connection. Functional areas such as telecommunications, infotainment, automatic driving, and mobility services will have to face the implications of that growth. As long as vehicles need to exchange information with other vehicles or access external networks through a communication infrastructure, these vehicles must be part of a network. A VANET is a type of mobile network formed by base stations known as Road Side Units (RSU) and vehicles equipped with communication units known as Onboard Units (OBU). The two modes of communication in a VANET are Vehicle to Vehicle (V2V) and Vehicle to Infrastructure (V2I). Some authors consider that V2I communication has more advantages than V2V communication because it provides services such as driving guidance or early warnings for drivers. This consideration has led researchers to show more interest in this mode of communication. Likewise, others affirm that the problem of V2I communication is its security. This review focuses on the most relevant and current approaches to security in V2I communication. Among the solutions, we find authentication schemes based on blockchain technology, elliptic curve cryptography, key insulation strategies, and certificateless aggregate signature techniques. We also found security architectures and identification schemes based on SDN, NFV, and Fog/Edge/Cloud computing. The proposals focus on resolving issues such as privacy preservation, high computational cost, the regular updating and exposure of secret keys, large lists of revoked pseudonyms, lack of network scalability, and high dependence on certification authorities.
In addition, these proposals provide countermeasures or strategies against replay, message forgery, impersonation, eavesdropping, DDoS, fake information, modification, Sybil, man-in-the-middle, and spoofing attacks. Finally, we determined that attacks in V2I communications mostly compromise security requirements such as confidentiality, integrity, authentication, and availability. Preserving privacy while reducing computational costs through the integration of emerging technologies is the direction in which security in vehicular networks points.

Pablo Marcillo, Ángel Leonardo Valdivieso Caraguay, Myriam Hernandez-Alvarez
Open Access
Article
Conference Proceedings

Estimating Attackers’ Profiles Results in More Realistic Vulnerability Severity Scores

Digitalization is moving at an increasing speed in all sectors of the economy, and along with it, cybersecurity threats and attacks continue to rise rapidly. Enterprises in all economic sectors are compelled to constantly assess the vulnerabilities (weaknesses) of their Information and Communication Technology (ICT) systems and estimate their severity, to avoid exploitation by targeted cyber-attacks. Attacks may have catastrophic consequences (impacts), including the disruption or termination of operations, economic damage, long-term reputational damage, customer loss, lawsuits, and fines. Organisations need to undertake mitigating actions and technical controls to lower the severity of vulnerabilities and protect their ICT assets. However, security measures are expensive, especially for small companies. Cybersecurity is considered a burden by Small and Medium-sized Enterprises (SMEs) rather than a marketing advantage, and cost is their biggest challenge. We need to be as realistic as possible in vulnerability severity scoring, to decrease security costs for smaller companies while still preventing potential attackers from exploiting their assets. Identifying the potential attacker for each sector and company is the first step in building resilience. Classifications of attackers are usually based on whether they are internal, or on their means and capabilities, such as knowledge of the organization’s resources, including personnel, facilities, information, equipment, networks, and systems. In 2021, ENISA published a sector-specific taxonomy based on opportunities, means, motives, and the sectors or products attackers wish to target. In all existing classifications, the psychological, behavioural, and social traits of attackers are neither measured nor considered. The existing security scoring systems concentrate on technical severity and do not consider human factors through practical methods, such as incorporating external or internal attacker profiles into their calculations.
The Common Vulnerability Scoring System (CVSS) is a standard and widely adopted measure of vulnerability severity. CVSS assumes that the potential attacker will be highly skilled, but it does not consider any other human factors which may be involved. In recent years, our work has aimed to bridge psychosocial advancements, including human, behavioural, and psychosocial factors, with cybersecurity efforts in order to improve information systems and reach a realistic cyber-resilient state. The overarching objective of the present paper is to contribute further to realistic vulnerability severity scoring. Our main aim is to show that CVSS scores are not unique for every vulnerability but vary depending on the potential attacker. Based on an organisation’s cyber threat intelligence (CTI) level, the sectoral threats can be identified and the profiles of its potential attackers can be predicted. In this paper, we measure attackers’ profiles and use these values in the CVSS calculator to score the severity of vulnerabilities more accurately. Considering practical implications, multiple interventions and suggestions at various levels are presented to tackle ongoing internal and external cybersecurity threats and to enhance the CVSS so that it provides more realistic and accurate results.
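As a rough illustration of the idea, the CVSS v3.1 base-score formula (scope unchanged) can be recomputed with the attacker profile mapped onto the Attack Complexity metric. The profile-to-AC mapping below is an assumption made for this sketch only; it is not part of the CVSS standard or necessarily the paper's method.

```python
import math

# Hedged sketch: CVSS v3.1 base score (scope unchanged), with a hypothetical
# attacker profile mapped onto the Attack Complexity (AC) metric.
AC = {"skilled": 0.77, "low_skill": 0.44}   # CVSS v3.1 weights for AC:L / AC:H
AV_NETWORK, PR_NONE, UI_NONE = 0.85, 0.85, 0.85
CIA_HIGH = 0.56                             # weight for C:H / I:H / A:H

def roundup(x: float) -> float:
    """CVSS 'Roundup': smallest one-decimal value >= x."""
    return math.ceil(x * 10) / 10

def base_score(attacker: str, c=CIA_HIGH, i=CIA_HIGH, a=CIA_HIGH) -> float:
    iss = 1 - (1 - c) * (1 - i) * (1 - a)                       # impact sub-score
    impact = 6.42 * iss                                         # scope unchanged
    exploitability = 8.22 * AV_NETWORK * AC[attacker] * PR_NONE * UI_NONE
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# A vulnerability that rates 9.8 against a skilled attacker drops to 8.1
# when only a low-skill attacker (modelled as AC:H) is plausible:
print(base_score("skilled"), base_score("low_skill"))   # -> 9.8 8.1
```

These two values match the standard CVSS v3.1 scores for AV:N/PR:N/UI:N/C:H/I:H/A:H with AC:L and AC:H respectively, which is what makes AC a natural place to encode assumed attacker capability.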

Kitty Kioskli, Nineta Polemi
Open Access
Article
Conference Proceedings

Non-Experts' Perceptions Regarding the Severity of Different Cyber-Attack Consequences: Implications for Designing Warning Messages and Modeling Threats

Cyber-defenders must account for users’ perceptions of attack consequence severity. However, research has yet to investigate such perceptions of a wide range of cyber-attack consequences. Thus, we had users rate the severity of 50 cyber-attack consequences. We then analyzed those ratings to a) understand perceived severity for each consequence, and b) compare perceived severity across select consequences. Further, we grouped ratings into the STRIDE threat model categories and c) analyzed whether perceived severity varied across those categories. The current study’s results suggest not all consequences are perceived to be equally severe; likewise, not all STRIDE threat model categories are perceived to be equally severe. Implications for designing warning messages and modeling threats are discussed.
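The grouping of per-consequence ratings into STRIDE categories might be sketched like this. Both the consequence-to-category mapping and the severity numbers are hypothetical placeholders, not the study's 50 consequences or its data.

```python
# Hedged sketch: consequence severity ratings (made-up numbers on a 1-7
# scale) grouped into STRIDE categories and summarised by category mean.
from collections import defaultdict

STRIDE = {   # hypothetical consequence -> STRIDE category mapping
    "password stolen": "Spoofing",
    "log files altered": "Tampering",
    "attacker denies actions": "Repudiation",
    "records leaked": "Information disclosure",
    "website taken offline": "Denial of service",
    "admin rights gained": "Elevation of privilege",
}

ratings = {  # hypothetical mean user severity ratings
    "password stolen": 6.1, "log files altered": 4.8,
    "attacker denies actions": 3.9, "records leaked": 6.5,
    "website taken offline": 5.2, "admin rights gained": 6.0,
}

by_category = defaultdict(list)
for consequence, score in ratings.items():
    by_category[STRIDE[consequence]].append(score)

for cat, scores in by_category.items():
    print(f"{cat}: {sum(scores) / len(scores):.1f}")
```

With real data, the per-category distributions produced this way are what a test for differences across STRIDE categories would operate on.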

Natalie Lodinger, Keith Jones, Akbar Siami-Namin, Ben Widlus
Open Access
Article
Conference Proceedings