Artificial Intelligence and Social Computing


Editors: Tareq Ahram, Jay Kalra, Waldemar Karwowski

Topics: Artificial Intelligence & Computing

Publication Date: 2024

ISBN: 978-1-958651-98-8

DOI: 10.54941/ahfe1004635

Articles

Why Do or Don’t You Provide Your Knowledge to an AI?

This study examines the factors that influence individuals' readiness to share knowledge with artificial intelligence (AI) in organizational settings. With the increasing integration of AI into business processes, there are benefits such as increased operational efficiency and decision support. AI systems require the expertise of skilled employees to adequately support decisions and improve performance. However, providing knowledge and experience can also pose a risk to employees, as it could jeopardize job security. Using an explorative approach, including a literature review and qualitative interviews, this study identifies key motivators and barriers for providing knowledge to an AI. At the individual level, benefits such as learning opportunities encourage contribution. At the team level, motivators include individual reliance on collective knowledge. Cultural norms such as reciprocity in sharing also play a role. However, there are barriers, including fear of job loss due to automation, interpersonal issues such as criticism, and distrust of both management and AI. Strategies to positively influence these factors include strengthening employability, transparent management communication, and communities of practice for mutually sharing experiences with AI.

Philipp Renggli, Toni Waefler
Open Access
Article
Conference Proceedings

Application of Large Language Models in Stochastic Sampling Algorithms for Predictive Modeling of Population Behavior

Agent-based modeling of human behavior is often challenging due to restrictions associated with parametric models. Large language models (LLMs) play a pivotal role in modeling human-based systems because of their capability to simulate a multitude of human behaviors in contextualized environments; this makes them effective as a mappable natural language representation of human behavior. This paper proposes a Monte Carlo-type stochastic simulation algorithm that leverages large language model agents in a population survey simulation (Monte Carlo-based LLM agent population simulation, MCLAPS). The proposed architecture is composed of an LLM-based demographic profile data generation model and an agent simulation model, which together enable the modelling of a range of complex social scenarios. An experiment is conducted with the algorithm in modeling quantitative pricing data, where 9 synthetic Van Westendorp Price Sensitivity Meter datasets are simulated across groups corresponding to pairings of 3 different demographics and 3 different product types. The 9 sub-experiments show the effectiveness of the architecture in capturing key expected behavior within a simulation scenario, while reflecting expected pricing values.

Yongjian Xu, Akash Nandi, Evangelos Markopoulos
Open Access
Article
Conference Proceedings
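The MCLAPS-style sampling loop described in the abstract can be pictured as a plain Monte Carlo draw over agent responses. In the sketch below, `ask_agent` and `generate_profile` are hypothetical stubs standing in for LLM calls, and the profile fields, pricing formula, and numbers are invented for illustration; this is not the authors' implementation.

```python
import random
import statistics

def generate_profile(demographic, rng):
    # Stand-in for the LLM-based demographic profile generator.
    return {"demographic": demographic, "income_level": rng.choice([0, 1, 2])}

def ask_agent(profile, product, rng):
    """Stand-in for prompting an LLM agent: return the four Van Westendorp
    price points for one simulated respondent."""
    base = product["anchor_price"] * (1.0 + 0.3 * (profile["income_level"] - 1))
    noise = rng.uniform(0.85, 1.15)
    return {"too_cheap": base * 0.5 * noise, "cheap": base * 0.8 * noise,
            "expensive": base * 1.2 * noise, "too_expensive": base * 1.6 * noise}

def simulate_survey(demographic, product, n_respondents=200, seed=0):
    """Monte Carlo loop: draw a profile, query the agent, aggregate medians."""
    rng = random.Random(seed)
    answers = [ask_agent(generate_profile(demographic, rng), product, rng)
               for _ in range(n_respondents)]
    return {k: statistics.median(a[k] for a in answers)
            for k in ("too_cheap", "cheap", "expensive", "too_expensive")}

result = simulate_survey("urban young adults", {"anchor_price": 10.0})
```

Even with stubbed agents, the loop reproduces the expected ordering of the four price points, which is the kind of "key expected behavior" the sub-experiments check for.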

Human-centered Explainable-AI: An empirical study in Process industry

This paper presents an empirical study on the explainability of transformer models analyzing time series data, a largely unexplored area in the field of AI explainability. The study is part of an ongoing EU-funded project which applies a human-centered approach to developing explainable AI solutions for the process industry. Here, we investigate the choice of explainer mechanisms and human factor needs when developing eXplainable Artificial Intelligence (XAI) for operators of two industrial contexts: copper mining and paper manufacturing. On-site evaluations were conducted in these settings involving control room operators to test the prototype developed in the project. The results indicate that the method of feature importance alone was not sufficient to provide explanations that are tailored to individuals and situations, as required by users. Overall, our empirical data supports “social” explanations for AI users and demonstrates the value of involving end users in the design process of effective XAI solutions. We also provide design implications which address human factor needs for such solutions in industrial settings.

Yanqing Zhang, Emmanuel Brorsson, Leila Methnani, Elmira Zohrevandi, Nilavra Bhattacharya, Andreas Darnell, Rasmus Tammia
Open Access
Article
Conference Proceedings

Predictive functions of artificial intelligence for risk assessment in remote hybrid work

Remote hybrid work risk assessment is an obligation for the employer according to Occupational Safety and Health (OSH) regulations. Risk management requires the cooperation of the worker, who is now responsible for recognizing and managing hazards, necessitating specific technical training. Generative Artificial Intelligence (AI) technologies can support the knowledge needs of both workers and employers as effective tools for prevention in occupational health and safety, respecting privacy regulations and avoiding remote control of workers. Researchers from INAIL, Universitas Mercatorum, and the University of Sannio are developing an AI-based assistant for assessing risk in remote and hybrid work, facing challenges in assistant training due to limited availability of data on incidents and illnesses related to remote work. The generative AI prototype will be able to evaluate the relationship between remote work activities and types of injuries, using domestic injury data to identify patterns and high-risk areas. By integrating Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), the hybrid model will enable a dynamic and comprehensive analysis of hazards, contributing to a better understanding of risk factors in hybrid work contexts.

Giuditta Simoncelli, Mario Luca Bernardi, Laura De Angelis, Sara Anastasi, Michela Bonafede, Emanuele Artenio, Riccardo Pecori
Open Access
Article
Conference Proceedings

Evaluation of a Scale to Assess Subjective Information Processing Awareness of Humans in Interaction with Automation & Artificial Intelligence

Subjective Information Processing Awareness (SIPA) describes how users experience the extent to which a system enables them to perceive, understand, and predict its information processing. With the rising interdependence of information processing in Human-AI interaction, research methods for assessing user experience in automated information processing are needed. The objective of the present research was the empirical evaluation of the SIPA scale as an economical method to assess SIPA, as well as the construction of and comparison with a version in plain language. To this end, two empirical studies were conducted (NS1 = 317, NS2 = 230) to enable scale analysis. Results showed that the SIPA scale achieves excellent reliability and expected correlations with connected constructs, e.g., trust and perceived usefulness of AI systems. In addition, no benefits of a plain language variant were found. Based on the results, the SIPA scale appears to be a promising tool for examining user experience of systems with automated information processing.

Tim Schrills, Marvin Sieger, Marthe Gruner, Thomas Franke
Open Access
Article
Conference Proceedings
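Scale reliability of the kind reported here is typically quantified with Cronbach's alpha. The computation below is standard psychometrics rather than anything taken from the paper, and the example responses are made up.

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal-consistency reliability.

    item_scores[r][i] is respondent r's score on scale item i.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(item_scores[0])
    item_vars = [pvariance([row[i] for row in item_scores]) for i in range(k)]
    total_var = pvariance([sum(row) for row in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Perfectly consistent items (each respondent answers all items alike)
# give the maximum alpha of 1.0:
alpha = cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]])
```

Values around 0.9 or above are conventionally read as "excellent" reliability, which is the benchmark the abstract's claim refers to.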

Vector Result Rate (VRR): A Novel Method for Fraud detection in mobile payment systems

Mobile payment systems are becoming more popular due to the increasing number of smartphones, which also attracts fraudsters' attention. Researchers have therefore developed various fraud detection methods using supervised machine learning. However, sufficient labeled data are rarely available, and detection performance is negatively affected by the severe class imbalance in financial fraud data. This study proposes a new model, entitled Vector Result Rate (VRR), for fraud detection based on deep learning, while considering the economic consequences of fraud detection systems. The proposed framework is experimentally implemented on a large dataset containing more than six million mobile phone transactions. A comparative evaluation is performed against existing machine learning methods designed to model imbalanced data and detect outliers. The results show that VRR achieves the best performance on standard classification criteria by integrating several supervised classification algorithms.

Arman Daliri, Mahdieh Zabihimayvan, Kiarash Saleh
Open Access
Article
Conference Proceedings
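The abstract's point about severe class imbalance can be made concrete with the standard classification criteria it mentions: under imbalance, accuracy looks excellent even when many frauds are missed, which is why precision, recall, and F1 are preferred. The confusion-matrix numbers below are illustrative, not drawn from the paper's six-million-transaction dataset.

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard criteria from a confusion matrix (fraud = positive class)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"precision": precision, "recall": recall,
            "f1": f1, "accuracy": accuracy}

# Toy imbalanced case: 100 frauds among 1,000,000 transactions.
# A detector that catches 60 frauds and raises 40 false alarms:
m = classification_metrics(tp=60, fp=40, fn=40, tn=999_860)
# m["accuracy"] exceeds 0.9999 even though 40% of frauds are missed,
# so accuracy alone would hide poor detection; m["f1"] is 0.6.
```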

Positive Interactions with Intelligent Technology through Psychological Ownership: A Human-in-the-Loop Approach

While human-agent interaction is intended to ease daily and critical burdens on human operators, issues such as trust, lack of transparency, and system performance often negatively impact the process, yielding sub-optimal outcomes. Here, we propose a human-in-the-loop approach, in which users train an AI, as a potential avenue to remedy this complex problem. We use Tetris® as a use case and require participants to provide trial-by-trial inputs to train the AI model. Improvements in trust correlated with increased satisfaction levels during the training process but not with final AI performance. Users' preference for their trained AI, compared to a pre-trained AI, was associated with further improvements in trust. Personality and AI literacy did not affect these relationships. Results suggest positive perceptions towards AI systems can be elicited through psychological ownership pathways. We discuss how users' involvement in constructing the system may influence ownership, giving rise to positive human-agent interactions.

Bianca Dalangin, Stephen Gordon, Heather Roy
Open Access
Article
Conference Proceedings

Episodic Memory with Interactive 3D Sequential Graph

Episodic memory can be viewed as a learning process, not from existing knowledge, but from massive streams of news and episodic events. Sequences of episodic events can be used to predict future events. In this study, we assume that long-term memory can be simulated with a spatial and temporal database. We explore the 3D sequential graphs that offer a selection of methods to visualize episodic memory in the 3D space, a network of sequences of values, or a statistical summary of information about groups or subsets such as frequencies, ranges, and distributions. The graph can be accessed through a tablet, laptop, and AR/VR headsets. Users can navigate the graph with hand gestures, a game controller, or a mouse. The semantic graph is also connected to multimedia content such as video footage and spatial soundtracks because our episodic memory is multimedia. Finally, the applications of episodic memories are presented, including disastrous scenarios of laparoscopic cholecystectomy and malware distribution networks.

Yang Cai
Open Access
Article
Conference Proceedings

Meaningful Emoji: A Preliminary Exploratory Study of Graphic Symbols Usage for Health Communication

Emoji have become an important component of visual language since they were officially introduced into the Unicode Character Database in 2010 and have become commonplace in most people's lives. These graphic symbols gained popularity worldwide through their widespread use on social media platforms. More and more researchers have also used emoji as stimulus cues to explore the relationship between emotions and attitudes in the fields of communication, online behavior, health, food safety, and other fields of study. Gboard, a virtual keyboard developed by Google, announced Emoji Kitchen in 2020 for Android users; it is now also open to iOS users, and a web version has been launched for computer users. Emoji Kitchen is a special feature that allows users to combine two different emoji symbols into brand-new ones. Using Emoji Kitchen as a generator tool, this study aims to investigate which key visual elements could supplement the existing emoji set to meet users' communication needs, particularly in the health context. The study first treats emoji as elements of visual language that can represent a noun, a verb, or an interjection. Second, an exploratory investigation was conducted to test how people make sense of the relationship between multiple emoji, demonstrating how various arrangements of emoji affect their semantic meanings. Results show that the current standardized and widely used emoji are inadequate for users to express themselves freely online in the context of health issues. Which emoji people choose, and how they use them when expressing emotions in the health context, were defined, and an initial consensus in visual preferences was identified. Our previous studies have shown that emoji are highly capable of making information communication more comprehensible and persuasive, as well as improving reading speed. This study provides further detail on how emoji function as vital components of visual language in communication.

Tingyi S. Lin, Sih-wei Li
Open Access
Article
Conference Proceedings

Exploring the Use of GenAI in the Design Process: A Workshop with Design Students

The introduction of artificial intelligence into the design process is bringing fundamental changes. AI can be used to improve or even radically change the process of designing digital solutions (Agner et al., 2020). In particular, AI has been hailed for providing many important possibilities, including greater customization at scale, more precise analysis of the use of digital solutions, and support for the creative process of designers (Oh et al., 2018). For this reason, it is important to continue to explore how these technologies can be developed and integrated into the design process in ways that facilitate and improve the work of designers. It is argued that the introduction of new digital tools such as AI into the design process brings about major changes in the nature of designers' work (O'Donovan et al., 2015) and that these changes entail radical changes in ways of working, as well as some potentially negative and unintended consequences (Gaffney, 2017). The main problem encountered is that the novelty and complexity of these technologies mean that designers and end users may have limited understanding of the ways in which they can influence the design and thus also its outcome, i.e., the product or service. This is not necessarily a skills issue; it also relates to the fact that artificial intelligence remains a black box that does not provide comprehensive explanations of its decision-making process (Asatiani et al., 2020). Most of the studies analyzed hold that design should not be merely automated by AI, with complete removal of the designer. Rather, it should be an AI-supported process, a co-design, in which AI is more likely to be used to automate only the most repetitive aspects of the process.
Although AI-designer collaboration requires designers to enrich their skills, and there are many implementations of the artificial partner, the research collected shows that AI-designer interaction achieves better results than design processes in which AI is absent or the process is fully automated. Given this background, this paper investigates some aspects of designers' behavior during co-design. It was noted that designers establish ways of relating to the artificial partner that vary according to how they understand (or rather, believe they understand) the black box, i.e., the information processing procedure of the AI. For example, the design process becomes smooth when designers find ways to communicate their intentions to the machine in a way that facilitates its response, even if the content is complex. Through a review of the literature and direct observation of workshops where young designers test and use AI, some valuable observations and insights have been clustered.

Elena Cavallin
Open Access
Article
Conference Proceedings

Development of an Explainable Pre-Hospital Emergency Prediction Model for Acute Hospital Care

This study introduces an eXplainable Artificial Intelligence (XAI) model designed to predict, in the pre-hospital phase, which emergency patients require acute hospital care, and to provide explanations for its reasoning. Emergency medical care is broadly divided into two stages: the pre-hospital and in-hospital stages. The various information gathered by paramedics during emergency activities in the pre-hospital stage and while transporting patients is crucial for describing the emergency patient's condition. However, key pre-hospital information, important for the in-hospital medical care of emergency patients, is filtered through the paramedics' imprecise memory and verbally shared in condensed form via phone or radio when transmitted to the hospital. To address this issue, we have developed a model that predicts the need for acute hospital care from pre-hospital information by integrating an ensemble model and advanced XAI techniques. The proposed model not only predicts emergency situations requiring acute hospital care but also keeps its predictive processes transparent and interpretable for medical professionals, addressing the critical need for an information linkage system between the pre-hospital and in-hospital phases.

Minjung Lee, Eun Jung Kwon, Hyunho Park
Open Access
Article
Conference Proceedings
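One way to picture "ensemble prediction plus explanation" is a soft vote over simple models combined with a crude swap-one-feature attribution. Everything below (the feature names, thresholds, rule models, and baseline values) is a hypothetical illustration, not the study's clinical model or its actual XAI technique.

```python
# All feature names, thresholds, and rule "models" below are hypothetical
# stand-ins for a trained ensemble; they are not the study's clinical model.
FEATURES = ["systolic_bp", "heart_rate", "gcs_score", "spo2"]

def model_vitals(x):   # flags shock-like vital signs
    return 1.0 if x["systolic_bp"] < 90 or x["heart_rate"] > 120 else 0.0

def model_neuro(x):    # flags reduced consciousness (Glasgow Coma Scale)
    return 1.0 if x["gcs_score"] < 13 else 0.0

def model_oxygen(x):   # flags low oxygen saturation
    return 1.0 if x["spo2"] < 92 else 0.0

MODELS = [model_vitals, model_neuro, model_oxygen]

def predict_acute_care(x):
    # Ensemble score: soft vote (mean) of the member models.
    return sum(m(x) for m in MODELS) / len(MODELS)

def explain(x, baseline):
    """Crude attribution: swap one feature at a time to a 'typical' baseline
    value and record how much the ensemble score drops."""
    score = predict_acute_care(x)
    return {f: score - predict_acute_care({**x, f: baseline[f]})
            for f in FEATURES}

patient = {"systolic_bp": 82, "heart_rate": 110, "gcs_score": 14, "spo2": 96}
typical = {"systolic_bp": 120, "heart_rate": 80, "gcs_score": 15, "spo2": 98}
contributions = explain(patient, typical)   # low blood pressure dominates
```

A per-feature report like `contributions` is what makes the prediction interpretable to clinicians: it names which pre-hospital observation drove the score.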

Dyadic Interactions and Interpersonal Perception: An Exploration of Behavioral Cues for Technology-Assisted Mediation

Mediators aim to shape group dynamics in various ways, such as improving trust and cohesion, balancing participation, and promoting constructive conflict resolution. Technological systems used to mediate human-human interactions must be able to continuously assess the state of the interaction and generate appropriate actions. In this paper, behavioral cues that indicate interpersonal perception in dyadic social interactions are investigated; such cues could inform the mediation strategies of these systems. Dyadic interactions are evaluated in which each interactant rates how agreeable or disagreeable the other interactant comes across. A multi-perspective approach is taken to evaluate interpersonal affect in dyadic interactions, employing computational models to investigate behavioral cues that reflect interpersonal perception in both the interactant providing the rating and the interactant being rated. The findings offer nuanced insights into interpersonal dynamics, which will be beneficial for future work on technology-assisted social mediation.

Hifza Javed, Nina Moorman, Thomas Weisswange, Nawid Jamali
Open Access
Article
Conference Proceedings

AI-based learning recommendations - possibilities and limitations

This article explains and critically reflects on the results of the EU project "Career Intelligence", which is being funded by the EU for 2.5 years. The project aims to further develop the use of the learning platform "Career 4.0", which has been tested throughout Europe, to promote entrepreneurial and digital skills among young people with the help of an AI-based learning assistant (Kröll & Burova-Keßler, 2022). The starting point for the article is the following questions: To what extent can the virtual learning assistant succeed in providing personalised learning recommendations with regard to the development of entrepreneurial and digital skills? What technical and content-related requirements should be met so that the virtual learning assistant can contribute recommendations that promote learning? Which factors are decisive in the development of favourable learning recommendations? How, and with the help of which criteria, can the quality of learning recommendations be guaranteed? The insights gained were and are used in the project to contribute to the professionalisation of young people's personal development plans and are the starting point for designing the interaction between the young people and the virtual learning assistants in a way that promotes learning. The scientific debate contains numerous references to the use of AI tools in vocational education and training and the possibility of developing learning recommendations with their help (Biel et al., 2019; Bäsler & Sasaki, 2020). These include (a) the preparation of the learning offer to promote learning objectives, (b) the recording and evaluation of learning processes and outcomes by AI, (c) the provision of personalised recommendations for the learner, (d) enabling the further development of the relevant competences, and (e) increasing the probability of achieving the learning objectives.
This article examines these claims and asks to what extent these general promises can be kept. To this end, the results of a potential and resistance analysis from the perspective of the users of the learning platform are discussed. It is known from a large number of empirical studies that the intensive use of a learning platform, such as the Career 4.0 learning platform, depends to a large extent on the facilitation and promotion of interaction (Kröll & Burova-Keßler, 2023). The development and establishment of a virtual learning assistant is aimed precisely at promoting the interaction of young people (mentees) in the context of the learning platform. For this to succeed, the dialogues between the learning assistant and the young person are of crucial importance. This also includes the recommendation of learning content by the virtual learning assistant. This raises the question of how the dialogues can be designed to promote interaction between the young person and the virtual learning assistant. It proves useful to involve the young people in the development of the dialogue; however, these efforts have their limits. In the EU project, workshops were held to develop learning recommendations. A central focus was on the development of criteria that are particularly relevant for the design of learning recommendations. The following aspects were emphasised as particularly important: (a) the learner's personal learning goals, (b) their interests and strengths, and (c) the language in which learners communicate with each other. For the further development of the personal development plan, it is crucial to first make concrete which goals the learners are pursuing. In doing so, it is beneficial to refer to the theoretical approaches of goal theory (Terblanche et al., 2021).

Martin Kröll, Kristina Burova-Keßler
Open Access
Article
Conference Proceedings

A Novel Agent-Based Framework for Conversational Data Analysis and Personal AI Systems

This paper introduces a novel agent-based framework that leverages conversational data to enhance Large Language Models (LLMs) with personalized knowledge, enabling the creation of Artificial Personal Intelligence (API) systems. The proposed framework addresses the challenge of collecting and analysing unstructured conversational data by utilizing LLM agents and embeddings to efficiently process, organize, and extract insights from conversations. The system architecture integrates knowledge data aggregation and agent-based conversational data extraction. The knowledge data aggregation method employs LLMs and embeddings to create a dynamic, multi-level hierarchy for organizing information based on conceptual similarity and topical relevance. The agent-based component utilizes an LLM Agent to handle user queries, extracting relevant information and generating specialized theme datasets for comprehensive analysis. The framework's effectiveness is demonstrated through empirical analysis of real-world conversational data and a user survey. However, limitations such as the need for further testing of scalability and performance under large-scale, real-world conditions and potential biases introduced by LLMs are acknowledged. Future research should focus on extensive real-world testing and the integration of additional conversational qualities to further enhance the framework's capabilities, ultimately enabling more personalized and context-aware AI assistance.

Bartosz Kurylek, Arthur Camara, Akash Nandi, Evangelos Markopoulos
Open Access
Article
Conference Proceedings
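The "dynamic, multi-level hierarchy ... based on conceptual similarity" can be illustrated at its simplest level as grouping message embeddings by cosine similarity to topic centroids. The greedy single-pass scheme and the toy two-dimensional embeddings below are our own simplification, not the framework's actual aggregation method.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def group_by_topic(embedded_msgs, threshold=0.8):
    """Greedy single-pass grouping: each message joins the first existing
    topic whose centroid it is similar to, otherwise it starts a new topic."""
    topics = []  # each topic: {"centroid": vector, "members": [text, ...]}
    for text, vec in embedded_msgs:
        for topic in topics:
            if cosine(vec, topic["centroid"]) >= threshold:
                topic["members"].append(text)
                # Keep the centroid as the running mean of member vectors.
                n = len(topic["members"])
                topic["centroid"] = [(c * (n - 1) + v) / n
                                     for c, v in zip(topic["centroid"], vec)]
                break
        else:
            topics.append({"centroid": list(vec), "members": [text]})
    return topics

# Toy 2-D "embeddings"; a real system would use an embedding model.
msgs = [("budget for Q3", [0.9, 0.1]), ("Q3 spending plan", [0.85, 0.2]),
        ("weekend hiking trip", [0.1, 0.95])]
topics = group_by_topic(msgs)
```

Applying the same grouping to topic centroids themselves would yield the next level of the hierarchy, which is the multi-level idea the abstract describes.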

Development of Neural Networks for Deepfake Recognition

Nowadays, the creation of deepfakes for various purposes is widespread: sometimes for the sake of entertainment, and sometimes with malicious intent. In the latter case, such deepfakes can potentially harm a person. Neural networks trained to recognize deepfakes can become a practical tool for combating malicious fakes. This article considers neural network architectures that can be used to recognize photo and video deepfakes, describes datasets that can help in training neural networks for the task, and determines what preprocessing is needed for different datasets. Particular attention is paid to possible options for combining the results of several trained models.

Alina Latipova, Maria Yadryshnikova
Open Access
Article
Conference Proceedings
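The closing point about combining the results of several trained models is, in its simplest form, a weighted soft vote over per-model fake probabilities. The sketch below assumes each detector outputs a probability in [0, 1]; the detector types and scores are illustrative, not from the article.

```python
def combine_detectors(probs, weights=None):
    """Weighted soft vote over per-model 'this is fake' probabilities."""
    if weights is None:
        weights = [1.0] * len(probs)
    return sum(p * w for p, w in zip(probs, weights)) / sum(weights)

def is_deepfake(probs, threshold=0.5, weights=None):
    return combine_detectors(probs, weights) >= threshold

# Three hypothetical detectors (e.g., a frame-level CNN, a temporal model,
# and an artifact-frequency model) each report a fake probability:
scores = [0.9, 0.7, 0.4]
verdict = is_deepfake(scores)
```

Weights would normally be tuned on a validation set so that more reliable detectors dominate the vote.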

Evaluating explainability of time series models: A user-centered approach in industrial settings

This paper investigates methods for evaluating the explainability of transformer models analyzing time series data, a largely unexplored area in the field of explainable AI (XAI). The study focuses on application-grounded methods involving human subject experiments with domain experts. On-site evaluations were conducted in two industrial settings involving 14 control room operators. The evaluation protocols consisted of methods to measure the metrics of subjective comparison, forward simulatability, and subjective satisfaction. The results indicate that the chosen combination of evaluation metrics provides a multi-faceted assessment of the quality and relevance of explanations from an operator's perspective in industrial settings. This contributes to the field of user-centered XAI evaluation, particularly in the context of time series data, and offers insights for future work in this area.

Emmanuel Brorsson, Yanqing Zhang, Elmira Zohrevandi, Nilavra Bhattacharya, Andreas Theodorou, Andreas Darnell, Rasmus Tammia, Willem Van Driel
Open Access
Article
Conference Proceedings

Prompt my prototype: NaiVE Framework for Artificial Intelligence use in Engineering Product Development

This paper presents a framework aimed at helping educators incorporate Artificial Intelligence into their classes, with a particular focus on Engineering Product Development. As students become more acquainted with the possibilities of Artificial Intelligence, challenges arise in the way they learn, research, and analyze the information supplied by AI systems. The NaiVE framework places emphasis on students' use of their technical expertise and critical thinking by explicitly requesting that they follow a set of steps designed to guide them towards getting the best out of Artificial Intelligence outputs. The framework was validated with three different case studies, including students from different engineering majors, seniorities, and courses carrying out a 5-week project with an industrial partner. The first, "Dynamical Design", was for mechanical engineers in their sophomore year, who used generative design with Autodesk Fusion 360 to propose new solutions for an All-Terrain Vehicle. The second, "Mechatronic Design", was a course for mechatronic engineers in their junior year, who had to develop an innovative proposal for warehouse logistics and provide a Product Requirements Document (PRD), Product Design Specifications (PDS), and a prototype of their solution; for this, they were instructed in the correct use of ChatGPT and Teachable Machine. The third course, "Technological Entrepreneurship", was an elective for senior-year undergraduate students of the robotics, mechatronics, mechanics, computer science, chemical, biotechnology, nanotechnology, and data science majors, working together with senior marketing students of a course in Analytics and Advanced Market Intelligence; their project consisted of developing a technological solution to raise brand awareness for a civil association among younger generations, with Midjourney as the AI tool used to adapt the proposal to the aesthetics of the training partner and generate eye-catching results.

Donovan Esqueda Merino, Oliver Gómez Meneses, Hector Rafael Morano Okuno, Ricardo Jaramillo Godinez, Daishi A Murano Labastida, María De Los Angeles Ramos Solano, David Higuera Rosales, Rafael Caltenco Castillo, Karla Rodríguez Gómez, Iván Díaz De León Rodríguez
Open Access
Article
Conference Proceedings

Unveiling Mental Health Insights: A Novel NLP Tool for Stress Detection through Writing and Speaking Analysis to Prevent Burnout

In a time of rapid technological advancement and growing mental health awareness, innovative approaches that precisely identify and treat health-related problems are increasingly necessary. Given the prevalence of mental health issues, various tools that employ Artificial Intelligence to support rapid and effective interventions have been developed. This study focuses on the relationship between language expression and mental health, recognizing subtle nuances in both written and spoken communication as potential stress indicators and presenting a novel AI-enhanced tool for autonomous and passive stress detection. Specifically, in our study data scientists and psychologists collaborated to create and validate a groundbreaking knowledge base. This innovative database combines psychometrics, biometrics, and linguistic analysis to provide a comprehensive evaluation of stress levels. We used correlations with biomedical indicators, such as blood pressure, heart rate variability (HRV), and cortisol levels, to validate the results. The multidisciplinary team brought together expertise from data science and psychology to create a novel database with a wide range of sentences annotated with matching stress levels. Thanks to this strong psychometric framework for correlating the linguistic manifestation of stress with clinical diagnosis, we developed the first, to our knowledge, NLP (Natural Language Processing) tool for autonomous and passive stress detection. It covers a variety of emotional and cognitive stress indicators to provide a deeper understanding of stress that takes into account both subjective experiences and objective manifestations. Initial results show a strong relationship between the biomedical markers and the stress scores obtained from language analysis.
By combining data science techniques with psychometric insights, our stress detection achieves an F1 score of 83%, providing a more complete picture of a person's stress profile. Throughout the study, ethical considerations were taken into account, following well-defined data privacy and protection protocols; before any data was added to the database, participants were carefully informed about the purpose of data collection. Workplace communication platforms may be combined with our NLP technology to track employee well-being in a professional context, including real-time alerts to managers and HR specialists, allowing for timely interventions and promoting a collaborative and positive work environment. The strong correlation between clinical metrics and linguistic choices represents a significant step toward the reform of mental health care. In addition, the accuracy of the tool provides a reliable support system for spotting stress symptoms in both written and spoken communication. This should help change the way we think about stress, assisting in assessing the presence of a burnout condition before it escalates into more serious health issues. The implementation of this technology into various elements of daily life has the potential to transform societal perceptions of mental health, allowing for a more in-depth understanding of the multiple components involved in stress.

Matteo Mendula, Silvia Gabrielli, Francesco Finazzi, Cecilia Dompe', Mauro Delucis
Open Access
Article
Conference Proceedings

Application of Long Short-Term Memory (LSTM) Autoencoder with Density-Based Spatial Clustering of Applications with Noise (DBSCAN) on Anomaly Detection

Early fault diagnosis of equipment based on current condition assessment is one of the commonly used methods of CBM (condition-based maintenance). It refers to mining impending fault characteristics from the large amount of production data accumulated in long-term operation (Luo et al., 2019). However, these data are huge and multivariate, making manual feature extraction difficult. In manufacturing scenarios, the majority of machines are in a normal state and abnormalities are relatively rare, which leaves the collected data imbalanced. This study explores the use of an LSTM (Long Short-Term Memory) autoencoder combined with DBSCAN (Density-Based Spatial Clustering of Applications with Noise) under this condition of data imbalance. The reconstruction error of the trained model is used as an evaluation index: the errors at each time point between the reconstructed sequence and the actual sequence are calculated and fed to the DBSCAN model for classification. A water distribution system dataset from SKAB (Skoltech Anomaly Benchmark) was used to verify the anomaly detection of our proposed model. Our model achieves an F1-score of 0.8025, better than the four models proposed by Moon et al. in 2023. With an LSTM autoencoder, the proposed DBSCAN classification model avoids the difficulty of setting a threshold value in classification.
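The pipeline described above, per-time-step reconstruction errors passed to DBSCAN so that no explicit threshold must be chosen, can be sketched in miniature. In this sketch the trained LSTM autoencoder is replaced by precomputed reconstructions, and the `eps` and `min_pts` values are illustrative rather than the paper's settings:

```python
def reconstruction_errors(actual, reconstructed):
    """Per-time-step absolute error between the actual and reconstructed sequences."""
    return [abs(a - r) for a, r in zip(actual, reconstructed)]

def dbscan_1d(xs, eps, min_pts):
    """Minimal DBSCAN over scalar errors; label -1 marks noise, i.e. anomalies."""
    labels = [None] * len(xs)
    cluster = 0
    for i, x in enumerate(xs):
        if labels[i] is not None:
            continue
        neigh = [j for j, y in enumerate(xs) if abs(x - y) <= eps]
        if len(neigh) < min_pts:          # not a core point: provisionally noise
            labels[i] = -1
            continue
        labels[i] = cluster
        seeds = list(neigh)
        while seeds:                      # expand the cluster from core points
            j = seeds.pop()
            if labels[j] == -1:           # noise reached from a core point: border
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = [k for k, y in enumerate(xs) if abs(xs[j] - y) <= eps]
            if len(jn) >= min_pts:
                seeds.extend(jn)
        cluster += 1
    return labels

# Hypothetical sequences: the autoencoder reconstructs normal behaviour well,
# so the anomalous time step stands out as a large reconstruction error.
actual        = [1.0, 1.1, 0.9, 5.0, 1.0]
reconstructed = [1.0, 1.0, 1.0, 1.0, 1.0]
errors = reconstruction_errors(actual, reconstructed)
labels = dbscan_1d(errors, eps=0.2, min_pts=3)  # index 3 flagged as noise (-1)
```

The design point the abstract makes is visible here: the small errors form a dense cluster on their own, so the anomaly falls out as DBSCAN noise without anyone choosing an error threshold.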

Chauchen Torng, Hehe Peng
Open Access
Article
Conference Proceedings

Equilateral Active Learning (EAL): A novel framework for predicting autism spectrum disorder based on active fuzzy federated learning

Autism Spectrum Disorder has a significant impact on society, and psychologists face a crucial challenge in identifying individuals with this condition. However, there is no definitive medical test for autism, and artificial intelligence can assist in diagnosis. This study outlines a framework for diagnosing autism spectrum disorders using Equilateral Active Learning (EAL). EAL incorporates three commonly used machine learning techniques: active learning, federated learning, and fuzzy deep learning. The framework integrates four robust datasets of children, teenagers, young adults, and adults using federated and fuzzy deep learning. Using EAL, autism spectrum disorder can be diagnosed with 90% accuracy, comparable to several machine learning methods, including statistical, traditional, modern, and fuzzy approaches.

Arman Daliri, Maryam Khoshbakhti, Mahdi Karimi Samadi, Mohammad Rahiminia, Mahdieh Zabihimayvan, Reza Sadeghi
Open Access
Article
Conference Proceedings

Artificial Intelligence for Cluster Detection and Targeted Intervention in Healthcare: An Interdisciplinary System Approach

Early detection of clusters of health conditions is essential to proactive clinical and public health interventions. Effective intervention strategies require real-time insights into the health needs of communities. Artificial Intelligence (AI) systems have emerged as a promising avenue to detect patterns in health indicators at the individual and population level. The purpose of this paper is to describe the novel expanded application of AI to detect clusters in health conditions and community health needs to facilitate real-time intervention and prevention strategies. Use-case examples demonstrate the capabilities of AI to harness a variety of data to improve health outcomes in conditions ranging from infectious diseases and non-communicable diseases to mental health disorders. AI systems have been utilized in syndromic surveillance to detect cases of infectious diseases prior to laboratory-confirmed diagnosis. These AI systems can analyze data from healthcare facilities, laboratories, and online self-reported symptoms to detect potential outbreaks and facilitate timely vaccination, resource allocation and public health messaging to mitigate the spread of disease. Similarly, the spread of vector-borne diseases can be anticipated through the analysis of historical data, weather reports and incidence of disease to identify areas in which to deploy vector control measures. In the area of mental health, AI algorithms can analyze diverse data sources such as social media posts, emergency hotline calls, emergency department visits, and hospital admissions to identify clusters related to mental health issues including overdoses, suicides, and burnout. The timely detection of such clusters enables prompt intervention, facilitating deployment of targeted mental health support services and community outreach programs that address these issues in a proactive manner.
Identifying trends and characteristics in chronic disease data can guide screening and intervention strategies in real time. Similarly, AI can enhance pharmacovigilance by identifying previously unknown patterns in adverse drug reactions to inform regulatory bodies, healthcare providers and researchers in efforts to provide data-driven, real-time patient safeguards. By harnessing data from air-quality monitors, health records, and meteorology reports, AI systems identify correlations between environmental factors and health issues to empower efforts to address specific environmental health risks. These use-case examples illustrate the potential for AI to serve as a valuable tool to facilitate real-time, data-driven insights that inform proactive clinical and public health intervention strategies. Ongoing challenges in harnessing AI technology for public health surveillance include data privacy, accessing quality data from diverse data sets, and establishing effective communication channels between AI systems and public health authorities. The use of anonymized data to detect clusters and identify the health needs of health regions is a potential strategy to mitigate these challenges. Available resources are limited and must be deployed in a targeted, informed, and timely manner to be most effective. The integration of AI into an expanded all-risks approach to syndromic surveillance represents the next step in identifying and responding to clusters of health-related events in a proactive manner that aligns with community needs while upholding ethical standards and privacy considerations.

Patrick Seitzinger, Zoher Rafid-Hamed, Jay Kalra
Open Access
Article
Conference Proceedings

Exploring the Integration of AI in Sepsis Management: A Pilot Study on Clinician Acceptance and Decision-Making in ICU Settings

This paper presents a human factors qualitative study on an AI application for managing sepsis in Intensive Care Units (ICUs). The study involved semi-structured interviews with nine ICU clinicians and nurses across three London hospitals. It consisted of two parts: the first applied methods to understand sepsis resuscitation processes and establish opportunities for the AI tool to mitigate gaps in the process. The second part examined adherence to AI recommendations based on factors like shift timing and user seniority, and whether shared risk in team decisions affects adherence. The findings revealed that while acknowledging the AI tool's potential benefits, participants would require a clear rationale explaining the AI results. They preferred AI suggestions that aligned with their views and did not risk patient safety, often seeking the confirmation of a colleague in uncertain situations. Overall, the study emphasised the cautious, context-dependent acceptance of AI recommendations in ICU settings. It also demonstrated the need for human factors studies to evaluate the user response to AI and its implications on decision-making.

Massimo Micocci, Hannah Kettley-linsell, Shanshan Zhou, Paul Festor, Simone Borsci, Matthieu Komorowski, Myura Nagendran, Anthony Gordon, Peter Buckle, George Hanna, Aldo Faisal
Open Access
Article
Conference Proceedings

Strategic Integration of AIGC in Asian Elderly Fashion: Human-Centric Design Enhancement and Algorithmic Bias Neutralization

The advent of Artificial Intelligence Generated Content (AIGC) has catalyzed transformative shifts in the domain of fashion design, providing novel opportunities for customization and innovation. This research delineates the strategic integration of AIGC within Asian elderly fashion design, critically examining its role in augmenting human-centric design principles while addressing the prevalent algorithmic biases. The objective is to empirically assess the efficacy of AIGC in creating designs that resonate with the functional and aesthetic preferences of the elderly Asian demographic. Employing a mixed-methods approach, the study first delineates the current limitations and potential enhancements AIGC offers to the fashion design process. Through iterative design experiments, AIGC applications are evaluated for their capacity to accommodate the nuanced needs of the target population. Concurrently, a fuzzy evaluation method systematically quantifies the feedback from design practitioners, revealing the salient factors and their relative influence on the AIGC-driven design process. Findings from the study highlight the dichotomy between AIGC's potential for personalized design and the inherent risks of reinforcing biases. The analysis provides a granular understanding of the interplay between AIGC capabilities and user-centered design requirements, emphasizing the necessity for a calibrated approach that prioritizes ethical considerations. The study culminates in a set of actionable guidelines that advocate for the integration of comprehensive educational modules on AIGC technologies in design curricula, aiming to bridge the interdisciplinary gap and enhance designer preparedness. The conclusion underscores the imperative for ongoing scrutiny of AIGC outputs, advocating for the development of robust frameworks that ensure equitable and inclusive design practices. 
Through this research, a path is charted toward the responsible utilization of AIGC, fostering a fashion industry that is adaptive, empathetic, and attuned to the diverse spectrum of aging consumers in Asia.

Hongcai Chen, Vongphantuset Jirawat, Yan Wang
Open Access
Article
Conference Proceedings

Towards Safer Routes: Exploring the Potential of Artificial Intelligence and Augmented Reality in Children's School Commuting Environment Design

Augmented reality technology is characterised by its mobile, practical, multimodal, and data-driven trend. This paper presents a design framework that employs AI and augmented reality technologies to address the entire process of intelligent perception and design intervention for mitigating risks in children's school commuting environments. For the intelligent perception section, 617 accident news texts were used for training, utilising natural language processing (NLP) and machine learning techniques. As a result, 6 influential characteristics related to children's school commuting environment were identified (time, road type, surrounding features, area functional type, user age type, and connection type). The prediction model covers 3 severity degrees and 6 types of incidents. For the design intervention section, design strategies and techniques are defined according to the severity degrees. The prototype is designed using a mobile augmented reality device and a smart watch as design touchpoints.
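The perception stage, learning incident properties from accident news texts, can be illustrated with a toy bag-of-words Naive Bayes classifier. Everything below is hypothetical (a four-sentence mini-corpus and two severity labels); the paper's model is trained on 617 real news texts and distinguishes 3 severity degrees and 6 characteristics:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (text, severity). Returns word counts, totals, class counts, vocab."""
    counts, totals, class_n = defaultdict(Counter), Counter(), Counter()
    for text, sev in docs:
        class_n[sev] += 1
        for w in text.lower().split():
            counts[sev][w] += 1
            totals[sev] += 1
    vocab = {w for c in counts.values() for w in c}
    return counts, totals, class_n, vocab

def predict_nb(model, text):
    """Pick the severity maximizing log prior + Laplace-smoothed log likelihoods."""
    counts, totals, class_n, vocab = model
    n = sum(class_n.values())
    best, best_lp = None, -math.inf
    for sev in class_n:
        lp = math.log(class_n[sev] / n)
        for w in text.lower().split():
            lp += math.log((counts[sev][w] + 1) / (totals[sev] + len(vocab)))
        if lp > best_lp:
            best, best_lp = sev, lp
    return best

# Hypothetical mini-corpus of accident descriptions with severity labels
docs = [
    ("child injured crossing arterial road at night", "severe"),
    ("vehicle collision near school gate morning rush", "severe"),
    ("minor scrape on residential street sidewalk", "mild"),
    ("near miss at crosswalk no injury", "mild"),
]
model = train_nb(docs)
print(predict_nb(model, "collision at arterial road"))  # severe
```

A production system would of course use richer features (the six environment characteristics above) and a stronger model, but the train-on-labelled-news-then-predict-severity loop is the same shape.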

Jinghao Hei
Open Access
Article
Conference Proceedings

Created by Humans & AI?

While looking to expand our digitally produced content, applications, and tools, we consistently pay attention to the rapidly developing AI technology. Major tech companies, Fortune 100 firms, and startups are rushing to implement AI in their services and operations to stay on top of the competition. The developed world is changing to put AI at the front of the new wave of technical innovations. While this seems like a logical expansion of the last decade's emphasis on AI progress to increase strategic and operational efficiencies, we are also opening the door to new challenges and potentially regressive societal outcomes. This paper will discuss the options and choices we face while integrating AI into our processes. Primarily, the role of bias in creative processes backed by AI generative tools raises questions about the ethical use of new technology. While AI tools may provide faster innovation processes, will those products reflect sustainable and responsible innovation? There is much to be said about the responsibility we, humans, have to evaluate AI-produced content and products. Will our choices about the present and future be sourced from our positive, creative, and emotional human side and responsibly balanced in the AI algorithms? We will review the practical implementation of generative AI and what to look for to minimize biased or exaggerated results. "What is perhaps at the core of the experiences [and applications] we create, as well as our platform itself, is AI. AI is the runtime that is going to shape all of what we do going forward in terms of the applications as well as the platform advances." - Satya Nadella, Microsoft CEO, speaking at Microsoft's Leading Transformation with AI in central London, May 22, 2018. We are now amid what Nadella was talking about in 2018.
As further insight is provided to companies' C-suite and management on building and executing AI/ML models, the industry and academia can create standards and considerations for the future of responsible AI innovation. We will examine these opportunities and solutions to implement in our product development processes.

Jenya Edelberg
Open Access
Article
Conference Proceedings

Using ChatGPT to Support Criminal Investigations: A Comparative Study of AI and Human Query

This paper examines the role of advanced Artificial Intelligence (AI), particularly Large Language Models (LLMs) like ChatGPT, in supporting and enhancing criminal investigations. We focus on the integration of AI in query generation, intelligence analysis, and the interpretation of vast datasets to identify patterns and connections within criminal activities. Through a comparative study involving human participants and ChatGPT, we investigate the effectiveness of AI-generated queries in the 'North by Southwest' scenario, a simulated criminal case involving drug trafficking and money laundering. The ChatGPT study evaluates the AI's ability to generate a coherent investigation strategy and sequence investigative questions effectively. The human study, involving eight female Ph.D. candidates, assesses the strategies individuals employ when reasoning and developing hypotheses from ambiguous information, specifically focusing on three analytical approaches: following money, crimes, and people. Our findings highlight the complementary nature of AI and human analytical approaches. While ChatGPT provides a structured framework for sifting through evidence, human participants offer detailed, situational insights, particularly in connecting financial, criminal, and interpersonal elements. The study underlines the necessity of evaluating the accuracy and reliability of LLMs, considering the ethical implications and potential biases inherent in AI technologies. We conclude that a collaborative approach, utilizing both AI and human intelligence, can lead to more thorough and efficient investigations, ensuring that AI serves as an augmentative tool rather than a substitute for human expertise in the pursuit of justice.

Ahad Alotaibi, Chris Baber
Open Access
Article
Conference Proceedings

The Potential Issues and Crises of Artificial Intelligence Development

Since the time when humans, leveraging 'intelligence,' could contend with and dominate other species on Earth, they have held a dominant position in the relationship with other life forms. The explosive development of artificial intelligence (AI) has ushered in limitless possibilities for human society. Simultaneously, the potential issues and crises stemming from its development accompany a myriad of advantages. This study employs literature review and in-depth analysis to categorize the potential problems and crises of AI development into three levels: 'small, medium, and large.' These levels respectively denote the negative impacts AI brings to humanity, the conflicts between AI and humans, and the potential scenario of AI replacing and annihilating humanity. Building upon this hierarchical classification, the article proposes that throughout the AI development process, humanity must solve small problems, alleviate medium-scale issues, and remain alert to major challenges. This calls for a reevaluation of the relationship between humans and AI, an awareness of the existence of the 'singularity' in AI development, and a heightened emphasis on preventing potential crises resulting from uncontrolled and intervention-free AI development. In the realm of 'small issues,' the article discusses how the development of AI has led to a decline in the independence of human thought. This is manifested in weakened social skills, diminished memory capabilities, and a reduced capacity for independent decision-making. Furthermore, the potential replacement of non-technical occupations by AI may contribute to a widening gap in employment and wealth.
Issues related to information privacy and security become prominent, particularly in fields like science, medicine, and business, where the extensive use of AI for the analysis of sensitive user information poses inherent privacy risks. Additionally, concerns regarding the monopolization of data analysis and the presence of biases and discrimination in algorithms are significant challenges within the context of AI development. The 'medium issues' encompass discussions about the relationship between humans and AI, as well as the prospective trajectory of human civilization coexisting with AI. In the future, AI may attain a status comparable to humans. Questions arise about whether AI is inclined to continue aiding in human civilization's development, fostering a harmonious coexistence between humans and AI, or whether AI will give rise to an independent AI civilization detached from human influence. These considerations present challenges to the existing power structures and discourse systems predominantly shaped by human influence. In addressing the 'major challenges,' the article emphasizes the potential occurrence of an 'AI singularity,' a point in time when machine intelligence comprehensively surpasses human intelligence. This scenario could result in humans losing their understanding and control over AI, facing the threat of becoming a secondary species or even encountering existential risks. The article introduces the concept of a 'quiet' period preceding AI surpassing human intelligence, during which the substantial benefits derived from AI development may induce apathy and relaxation regarding the potential threat of AI dominance. In conclusion, this article offers a comprehensive and systematic perspective, analyzing potential issues and crises at different tiers in the development of AI. It provides a structured framework for addressing these challenges and calls for vigilance in recognizing the potential threats posed by AI.
The article underscores the importance of active intervention in technological development within the humanities, encouraging public participation in establishing a public discourse system. This engagement aims to cultivate a more robust human-AI civilization aligned with human needs and core values.

Lingxuan Li, Wenyuan Li, Dong Wei
Open Access
Article
Conference Proceedings

Towards a framework for digital work engagement of enabling technologies

Increased use of robotization, automation, and artificial intelligence (AI) highlights the need for theory development of the digital work environment since such technologies are likely to significantly alter or even totally change human work practices. This paper focuses on how these technologies influence people’s digital work engagement. In an ongoing project, we study how support from these technologies changes the socio-technical work dynamics and how work engagement can be facilitated in such digital workplaces. The project aims to develop a digital work engagement framework based on input across multiple work domains. The present paper reports on an initial characterization of digital work engagement and presents a synthesis of findings from the first iteration of the envisioned framework. Finally, a discussion on the opportunities and challenges of enabling technologies for the future of work practices is provided.

Andreas Bergqvist, Jonathan Källbäcker, Rebecca Cort, Åsa Cajander, Jessica Lindblom
Open Access
Article
Conference Proceedings

Improving product design efficiency through integrated AI tools: an empirical study

In recent years, the rapid development of artificial intelligence models has spawned a variety of AI tools, especially those related to image generation. These tools have revolutionized the field of design. This study focuses on the overall process of product design, breaking it down into multiple parts to evaluate the functionality and utility of a range of AI tools. The goal is to test whether these tools can effectively facilitate and streamline the product design process. After identifying effective AI tools, we conducted comprehensive testing to determine the operations and parameters of these tools that are most consistent with the product design workflow. The study integrates these optimized operations into the entire product design process, resulting in a fundamental approach that outlines how these AI tools can work together to improve the efficiency and quality of the entire product design process, aiming to match or exceed the capabilities of human designers. In addition, preliminary experiments have verified the effectiveness of the method, showing that design efficiency and quality improve after adopting the integrated AI tool method in the product design process. Main research contents: (1) evaluate various AI tools in multiple parts of the product design process to identify effective solutions; (2) test the parameters and operating procedures of the identified AI tools to achieve the best results; (3) establish basic methods combining the selected AI tools and conduct experiments to obtain preliminary results. Main research methods: (1) in-depth interviews and field observation to understand the product design process; (2) data analysis and professional evaluation to assess the effectiveness of AI tools; (3) controlled experiments, designed using the control variable method, to verify the effectiveness of the established method.

Yubin Zhong, Jiawei Ou, Kaiqiao Zheng, Yan Luximon, Jing Luo
Open Access
Article
Conference Proceedings

Digital transformations and their impact on the economy, public relations and quality of life

This paper traces the digital transformation occurring as a result of the application of computer information and communication technologies. Digital transformation is the digitization of the economy: a complete change of an organization's structure, its relationships with the environment in which it operates, and the products and services it creates. Digitalization and related transformative processes lead to the creation of pervasive connectivity between people and institutions, diversification of activities, resources and data in the online space, and parallel work in the digital and real worlds. The main goal of the paper is to show that digital transformation enters all areas of the economy, social life, and civil society, which also changes people's quality of life. Digital transformations enable people, businesses and governments to operate efficiently and at lower costs. This creates huge potential for a large number of enterprises, banks, telecommunications companies, payment service providers, start-ups, and retailers, as well as institutions in the fields of education, culture, healthcare, politics, etc. Today, even the smallest organization has the opportunity to function as a global one, carrying out cross-border activity in some form. The digital networks that connect everything and everyone span ever larger spaces, so companies, communities and individuals are challenged to rethink what it means to function globally connected. Digital networks are important for the development and promotion of business and communication both at work and in leisure. Therefore, their management is time-consuming and requires both technical and marketing knowledge. The analysis in this article was made within the framework of the project "Quality of Life and well-being in the context of professional communities and their activity" КП-06-ПН80/12, funded by the National Science Fund.
The research work is theoretically based on already conducted empirical surveys that track the digital skills of employees in different economic sectors and professions. Five professional communities are studied: teachers; computer specialists and programmers; researchers and university lecturers; technical staff; and people employed in trade and services. On the basis of nationally representative surveys for the respective professions, the level of basic and specific digital skills possessed by employees in these professions was identified, alongside the level required by the current development of information and communication technologies and the needs of the respective profession for the performance of professional activities. On this basis, the need for up-skilling training and the importance of the company's training offerings for enhancing employees' digital skills are highlighted. The role of enhancing employees' digital skills in improving their quality of life is outlined in terms of creating better opportunities for professional and career development, higher incomes and achieving a better balance between work and leisure and work and family life. The article also traces the risks and prospects that digitization creates and that the economy and society face. The main conclusion is that digital technologies contribute to improving the quality of life, as well as to a more economical and efficient use of available resources.

Valentina Milenkova, Albena Nakova, Emilia Chengelova, Karamfil Manolov
Open Access
Article
Conference Proceedings

Application of Artificial Intelligence, Machine Learning and Deep Learning in Piloted Aircraft Operations: Systematic Review

Aviation research on artificial intelligence (AI), machine learning (ML), and deep learning (DL) has seen significant growth as these emerging technologies hold immense potential for supporting both human-centred and technology-centred aspects of civil aircraft operations. This systematic review, following the guidelines of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020, was registered on the Open Science Framework (DOI 10.17605/OSF.IO/ZR7A3) and focused specifically on the use of AI, ML, and DL in human-centric flight operations. The review conducted a comprehensive search of databases including Scopus, Web of Science, and IEEE Xplore, as well as online repositories (ResearchGate and Aerospace Research Central), to identify relevant articles published between 2013 and 2023. In total, 32 studies were included, which explored various applications of AI, ML, and DL for aircraft pilots and flight operations. The studies were categorized into four main areas: (i) assessment and management of human factors risks, including AI-assisted data analysis of pilot performance, crew resource management, and ML-based support for pilots’ cognitive workload monitoring, (ii) detection of human errors, with ML-based support systems for real-time monitoring and DL models for biometric monitoring of cockpit pilots, (iii) reduction and prediction of human errors, categorized into AI-assisted predictive analytics in flight accidents and ML-based pattern recognition to predict unstable approaches, and (iv) prevention of human errors in aviation through ML utilization for pilot training enhancement and AI-supported flight automation and decision support systems for flight operations. Analysis of the included studies revealed a rising trend in the publication of articles after 2020, albeit at a slow rate.
It is worth noting that the majority of studies focused on conceptual applications, with fewer studies involving empirical testing. The findings of this review highlight the potential for future research in developing and testing improved human factors risk assessment (HRA) models assisted by computational intelligence in piloted aircraft operations, with the ultimate aim of enhancing flight safety.

Steven Tze Fung Lam, Alan H.S. Chan
Open Access
Article
Conference Proceedings

Ethical Reflections on Computationally Enabled Design in the Age of Digital Intelligence

As technology continues to evolve and computer technology continues to improve, computational empowerment has become an integral part of our lives and work. When humans and machines begin to merge, we must be ready to embrace a whole new philosophy of life. In the field of design, the application of computing technology provides designers with more tools and resources to create new works more efficiently, but computationally enabled design also brings a series of ethical issues. In this regard, designers should seriously examine the real function and value of computationally empowered design, and establish a design ethic adapted to the needs of the times, in order to guide AI in a positive direction and promote social progress. This paper discusses the ethical issues and coping strategies of computationally empowered design in the age of digital intelligence, mainly from three aspects across three spatial and temporal dimensions.

Yijie Jia, Yi Shu, Jiamin Bu
Open Access
Article
Conference Proceedings

A systematic review of changing conceptual to practice AI curation in museums: Text mining and bibliometric analysis

The rapid development of artificial intelligence (AI) algorithms has accelerated the global digitization of museums. This study was conducted to clarify the shift from concept to practice by applying a systematic literature review combined with text mining and bibliometric analysis to build visualization networks, with articles selected from the Web of Science (WOS). Our research questions focused on revealing the interconnected network of digital museum collections, expert knowledge and algorithms, and recommendation systems. In total, 288 articles were selected for analysis. The findings showed that the conceptualization of AI curation in museums is currently underway, combining AI with museum curatorial knowledge and innovating modes of public participation in museum AI curation, with emphasis on the process of exchanging domain knowledge. Moreover, three dimensions emerged: (1) a design dimension focused on methods and approaches for curating museum artificial intelligence exhibitions, (2) a learning dimension focused on iterative development of new algorithm models to guide the practice of intelligent curation, and (3) a standard dimension focused on assessment and evaluation of public participation in curating museum cultural heritage exhibitions. In addition, the museum and AI communities will mutually benefit: the convergence of new technologies and the exchange of domain knowledge would result in fairer and safer applications in the future as a result of learning from one another's flaws.
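The bibliometric side of such an analysis typically rests on keyword co-occurrence counts, which become the edge weights of the visualization network. A minimal sketch on hypothetical keyword lists (not the 288 WOS records):

```python
from collections import Counter
from itertools import combinations

def cooccurrence(records):
    """Count unordered keyword pairs appearing together in the same article record."""
    pairs = Counter()
    for keywords in records:
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical author-keyword lists from selected records
records = [
    ["museum", "ai curation", "recommendation system"],
    ["museum", "ai curation", "digital collection"],
    ["recommendation system", "digital collection", "museum"],
]
pairs = cooccurrence(records)
print(pairs[("ai curation", "museum")])  # 2
```

Tools such as VOSviewer compute essentially this matrix at scale; the heaviest edges then surface the themes (collections, expert knowledge, recommendation systems) that the review maps.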

Shengzhao Yu, Jinghong Lin, Jun Huang, Yuqi Zhan
Open Access
Article
Conference Proceedings