Human Interaction and Emerging Technologies (IHIET-AI 2025): Artificial Intelligence and Future Applications

Editors: Tareq Ahram, Antonio Lopez Arquillos, Juan Gandarias, Adrian Morales Casas
Topics: Artificial Intelligence & Computing, Human Systems Interaction
Publication Date: 2025
ISBN: 978-1-964867-37-3
DOI: 10.54941/ahfe1005890
Articles
Using compact Retrieval-Augmented Generation for knowledge preservation in SMBs
Knowledge preservation is a critical challenge for small and medium-sized businesses (SMBs). Employee fluctuation and evolving work tasks create a permanent risk of knowledge and experience loss. Therefore, SMBs need effective and efficient strategies for knowledge retention. As most knowledge in companies is primarily encoded as language or text, large language models (LLMs) offer a promising solution for the preservation and utilization of knowledge. However, despite their strengths, their adoption and deployment are challenging. To address this issue, we propose a system based on the Retrieval-Augmented Generation (RAG) concept that combines small, locally run language models with traditional retrieval algorithms to significantly enhance the process of knowledge preservation and utilization by reducing search efforts.
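The RAG concept described above can be illustrated with a minimal sketch: a simple keyword-based retriever selects relevant documents, and the retrieved text is prepended to the prompt sent to a locally run language model. All names and the toy scoring function here are illustrative assumptions, not the paper's actual system.

```python
# Minimal sketch of a RAG pipeline: retrieve relevant documents with a
# simple term-frequency score (a stand-in for BM25 or similar), then
# augment the prompt with that context before calling a local model.
from collections import Counter

def score(query, doc):
    """Toy term-frequency relevance score."""
    q_terms = query.lower().split()
    d_counts = Counter(doc.lower().split())
    return sum(d_counts[t] for t in q_terms)

def retrieve(query, docs, k=1):
    """Return the k documents that best match the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def answer(query, docs, llm=None):
    """Augment the prompt with retrieved context before calling the model."""
    context = "\n".join(retrieve(query, docs, k=2))
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    # llm would be a locally run small language model; without one,
    # return the augmented prompt itself for inspection.
    return llm(prompt) if llm else prompt
```

In a production system the toy scorer would be replaced by a proper retrieval algorithm and `llm` by a small local model, but the control flow (retrieve, augment, generate) is the same.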
Erik Schönwälder, Martin Hahmann, Gritt Ott
Open Access
Article
Conference Proceedings
The role of Artificial Intelligence (AI) applications in Aviation Risk Management
The aviation industry is inherently complex, demanding rigorous risk management to ensure safety and operational efficiency. Artificial Intelligence (AI) has emerged as a transformative force, revolutionizing traditional practices and augmenting human decision-making capabilities. This paper explores the multifaceted applications of AI in aviation risk management, emphasizing its potential to enhance safety protocols, predictive analytics, and operational resilience. It analyzes AI-driven solutions, including machine learning, natural language processing, and computer vision, and their integration into risk assessment, hazard detection, and mitigation strategies. The study identifies three key areas where AI significantly impacts aviation risk management. First, predictive maintenance leverages machine learning algorithms to analyze aircraft data, enabling the early identification of mechanical issues and reducing unplanned downtimes. Second, AI-powered air traffic management systems utilize real-time data processing and optimization techniques to mitigate collision risks, improve route efficiency, and adapt to dynamic conditions. Third, natural language processing tools are employed to enhance pilot training and communication by analyzing patterns in incident reports and cockpit recordings, addressing human factors that contribute to aviation risks. In addition to operational benefits, this paper highlights the challenges associated with adopting AI technologies in aviation. Issues such as data privacy, algorithmic bias, and regulatory compliance are explored to underscore the need for ethical AI practices and robust governance frameworks. Furthermore, the paper examines case studies showcasing successful AI implementations in aviation risk management, including AI-driven safety audits and autonomous drones for runway inspections. 
These examples illustrate the transformative potential of AI while emphasizing the importance of human oversight to ensure reliability and accountability. The findings of this research underscore that AI is not merely a supplementary tool but a cornerstone of the next generation of aviation risk management strategies. By fostering collaboration between AI technologies and human expertise, the aviation industry can achieve unprecedented levels of safety and efficiency. This paper concludes by proposing a roadmap for the sustainable integration of AI into aviation risk management, advocating for multidisciplinary research, continuous learning systems, and regulatory harmonization to navigate the industry's evolving challenges.
Debra Henneberry, Dimitrios Ziakkas, Florian Doerrstein
Open Access
Article
Conference Proceedings
On the Lack of Phishing Misuse Prevention in Public Artificial Intelligence Tools
Phishing remains one of the most common and effective forms of social engineering, with cybercriminals constantly refining their tactics to exploit human vulnerabilities. The sheer volume of phishing attacks is staggering: almost 1.2% of all emails sent are malicious. This equates to around 3.4 billion phishing emails per day. The effectiveness of phishing attacks is also underlined by numerous studies. Phishing is identified as the leading initial attack vector, responsible for 41% of security incidents. This means that practically every company is threatened by phishing attacks. In parallel, there have been rapid advances in the field of artificial intelligence (AI) in recent years, giving the general public access to powerful tools that can handle complex tasks with ease. However, alongside these benefits, the potential for abuse has also become a major concern. The integration of AI into social engineering attacks has significantly increased the opportunities for cybercriminals. Research has shown that AI-generated phishing emails are difficult for humans to distinguish from real messages. According to one study, phishing emails written by AI were opened by 78% of recipients, with 21% clicking on malicious content such as links or attachments. Although the click-through rate is still lower compared to human-crafted emails, generative AI tools (GenAI) can help cybercriminals compose phishing emails at least 40% faster, which can lead to a significant increase in phishing success rates. The increasing potential to use public AI tools for abusive purposes has also been recognized by the developers of AI models. Thus, publicly available AI tools often have built-in mechanisms to detect and prevent misuse. This paper examines the potential for misuse of publicly available AI in the context of phishing attacks, focusing on the content generation phase.
In particular, the study examines the effectiveness of existing abuse prevention mechanisms implemented by AI platforms, such as fine-tuning, filters, rejection sampling, system prompts and dataset filtering. To this end, it explores how prompts to the AI need to be altered to circumvent the misuse prevention mechanisms. While in some cases a simple request to write a phishing email succeeds, other AI tools implement more sophisticated mechanisms. In the end, however, all prevention safeguards could be circumvented. The findings highlight the significant threat posed by AI-powered social engineering attacks and emphasize the urgent need for robust defense-in-depth strategies against phishing attacks and increased awareness to mitigate the risks in the evolving digital landscape. In addition, the paper demonstrates that AI tools vary in the quality of the phishing emails they generate. To this end, the phishing emails generated by circumventing the protection mechanisms of the AI are (subjectively) compared and evaluated by the authors. The preliminary conclusion is that the automatically generated phishing emails of some public AI tools can certainly match the quality of manually crafted emails. While objective confirmation of this hypothesis requires further study, even the subjective quality of the generated phishing emails shows the dimension of the problem.
Alvaro Winkels, Marko Schuba, Tim Höner, Sacha Hack, Georg Neugebauer
Open Access
Article
Conference Proceedings
Cost-Effectiveness of the "Digital Air Traffic Controller"
Our paper analyzes the economic cost-effectiveness of "Digital Air Traffic Controller" (Digital ATCO), an AI-supported system developed by the Project “Collaboration of aviation operators and AI systems” (LOKI) of the German Aerospace Center (DLR). In Europe, delays due to air traffic control constraints—particularly staffing shortages—impose annual costs of around €1.9 billion on airlines. Additionally, the annual employment costs for air traffic controllers reach approximately €2.9 billion. With air traffic on the rise, these costs are anticipated to grow further. Currently, two air traffic controllers work collaboratively to manage a sector. The Digital ATCO system aims to significantly reduce the workload of air traffic controllers. This innovation reduces staffing requirements, allowing reallocation of personnel to address shortages or build capacity where needed. Our findings indicate that the benefits of Digital ATCO are very likely to outweigh its costs, demonstrating strong potential for economic and operational efficiency in air traffic management.
Martin Jung, Florian Wozny, Maximilian Engel
Open Access
Article
Conference Proceedings
Another AI - Analog Intelligence
AI is often thought of as something artificial, i.e., part of the man-made world. However, we should remember that prompts are important in AI. In short, what AI did is simply expand the search space, thanks to advances in computing machines. Decision-making on how to set up the search space and how to search, that is, the strategy, is instructed by humans. The same is true for generative AI: a decision must be made as to how to carry out generation, and it is humans who make such decisions. Current computing is based on 0-1, so digitalization is emphasized, but the real world is analog. Digitalization streamlines processing and enables objective, quantitative evaluation. Because of the emphasis on digitalization, the brain is attracting attention. As humans play a large role in AI, let us think about humans. It is generally believed that humans make decisions using their brains. But brains structure bodily experience into knowledge. Thus, knowledge is very much personal, and there is a delay. However, even in our daily lives, the real world is constantly changing. To accurately grasp these situations and respond appropriately, we need wisdom, not knowledge. Furthermore, the real world is analog. Humans are made up of cells. Digitalization is, so to speak, being discussed at the cellular level. But to discuss what we should do to cope with the continuously changing real world and make appropriate decisions, we should consider the level of humans. Then, how can we process the analog world with wisdom? The current industrial society is product-centric, so a quantitative, objective approach is important. But as Maslow pointed out, human needs shift from material needs to mental needs over time, and at the last stage we pursue "self-actualization": we will be pursuing our own personal capabilities. This is subjective and qualitative.
In short, we should move from a Euclidean to a Non-Euclidean approach. Mahalanobis, a researcher in the design of experiments, developed the Mahalanobis Distance (MD) to remove outliers. But from another perspective, MD prioritizes our decisions. In other words, it shifts our pursuit from how to what and why, i.e., from tactics to strategy. In our previous research, we succeeded in the immediate detection of emotion from faces by introducing a cartoon pattern approach. Its basic idea is to introduce MD and patterns to make appropriate decisions for analog objects. Here, this idea of Analog Intelligence is described.
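For readers unfamiliar with it, the Mahalanobis Distance mentioned above measures how far a point lies from the mean of a data cloud, scaled by the cloud's covariance. A small illustrative computation for two-dimensional data follows; the 2x2 matrix inversion is written out by hand, and the data are invented for the example.

```python
# Illustrative Mahalanobis Distance for 2-D points:
# MD(p) = sqrt((p - mu)^T * Sigma^-1 * (p - mu))
import math

def mean(xs):
    return sum(xs) / len(xs)

def covariance_2d(points):
    """Mean vector and (population) covariance matrix of 2-D points."""
    mx = mean([p[0] for p in points])
    my = mean([p[1] for p in points])
    n = len(points)
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    return (mx, my), ((sxx, sxy), (sxy, syy))

def mahalanobis_2d(p, points):
    (mx, my), ((a, b), (_, d)) = covariance_2d(points)
    det = a * d - b * b
    # inverse of [[a, b], [b, d]] is [[d, -b], [-b, a]] / det
    ia, ib, id_ = d / det, -b / det, a / det
    dx, dy = p[0] - mx, p[1] - my
    return math.sqrt(dx * (ia * dx + ib * dy) + dy * (ib * dx + id_ * dy))
```

Points with a large MD relative to the rest of the data are candidate outliers, which is the use the abstract refers to.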
Shuichi Fukuda
Open Access
Article
Conference Proceedings
Human Resource Information System and Operational Efficiency among the Professional ICT Providers in Nigeria
The challenges of the knowledge-based economy require ideas and expertise, where a creative and innovative workforce is of greater value. This is because the efficient and effective management of human capital is becoming an increasingly imperative and complex process. Therefore, organizations should increase the gathering, storing, and analyzing of information regarding their Human Resources (HRs) through the use of a Human Resource Information System (HRIS) in order to increase operational efficiency. Information technology (IT) and information systems (IS) are part of HR functions, developed and used for better HRM programmes that have direct significance for HR functions and provide HR professionals with opportunities to enhance their contribution to the strategic direction of the firm. With the changing world and the evolution of new technology, meeting this information requirement becomes important. HR managers need to be aware that the change in technology will not only increase the quality of employee information, but will also have a strong effect on the overall effectiveness of the organization. Thus, HR professionals need to participate and contribute fully to their organizations, as true strategic business partners, using HRIS proxies. This research addresses the knowledge gap regarding the impact of HRIS on private organizations' efficiency in Nigeria, especially professional Information and Communication Technology (ICT) providers, by focusing on recruitment and application tracking, workforce analytics insights, and system functionalities. The study provides insights into the influence of these technologies on time savings, employee productivity, and the reduction of wastage. To achieve these objectives, a survey research design was adopted, using online questionnaires to solicit data from professional ICT providers in Nigeria.
Hypotheses were formulated and tested at a specific level of significance using inferential statistical tools to investigate the impact of HRIS on operational efficiency. A sample of employees responsible for HRIS operations in selected organizations was chosen using a cluster sampling technique. Data were collected through a structured questionnaire with 5-point Likert rating scales. Regression analysis was employed for hypothesis testing, and the results reveal significant impacts. The first hypothesis demonstrates that recruitment and application tracking systems positively impact time savings in Nigerian organizations, contributing to operational efficiency. The second establishes a substantial positive influence of workforce analytics insights on employee productivity, emphasizing the strategic importance of HRIS in enhancing workforce effectiveness. The third emphasizes the role of system functionalities in reducing wastage, aligning with operational efficiency goals. In conclusion, the study highlights the critical role of HRIS in enhancing operational efficiency in Nigerian ICT private organizations. The findings emphasize the practical relevance of recruitment and application tracking, workforce analytics insights, and system functionalities in shaping time savings, employee productivity, and wastage reduction. The research contributes valuable insights for organizations seeking informed decision-making regarding the adoption and optimization of HRIS for improved performance and efficiency.
Kabiru Genty, Moshood Elusope, Efigenia Semente
Open Access
Article
Conference Proceedings
AI Support for Establishing and Operating an Information Security Management System (ISMS)
The increasing complexity of information security threats and ever more stringent legal requirements mean that more and more organizations are setting themselves the goal of implementing an effective and efficient information security management system (ISMS). This paper examines the ways in which artificial intelligence (AI) in the form of a chatbot can support the development and operation of an ISMS. In particular, it evaluates how a chatbot can be integrated into standard setup and operating processes within an ISMS. In addition, various possible applications are shown and advantages, disadvantages and limitations are discussed. It turns out that the use of a chatbot as a supporting tool has many advantages and, in the hands of specialist personnel, offers a useful addition to established methods. Consequently, chatbots open up the possibility for organizations to optimize their organizational and operational processes.
Florian Großimlinghaus, Marko Schuba, Tim Höner, Sacha Hack, Georg Neugebauer
Open Access
Article
Conference Proceedings
Evaluating Training Acceleration through Selective Workload Skipping: Methods and Benchmarks
Work-skipping methods accelerate neural network training by selectively skipping work that is deemed not to contribute significantly to learning. The goal of such methods is to reduce training time while incurring little or negligible reduction in output accuracy. We identify a “blind-spot” in current best-practice methodologies used to evaluate the effectiveness of work-skipping methods. Current methodologies fail to establish objective ways of determining whether a time reduction vs. accuracy drop trade-off is indeed beneficial. We propose a set of guidelines for evaluating the effectiveness of workload skipping techniques. Our guidelines emphasize the importance of using wall clock time, comparing with random skipping baselines, incorporating early stopping or time-to-accuracy measures, and utilizing Pareto curves. By providing a structured framework, we aim to assist practitioners in accurately determining the true speed advantages of training acceleration algorithms that involve workload skipping. To illustrate the appropriateness of our guidelines we study two work-skipping methods: GSkip, which skips complete layers’ gradient computations and weight updates based on their relative changes, and DeadHorse, which selects data samples for backpropagation according to output confidence. We demonstrate how our methodology can establish when these methods are indeed beneficial. We find that on many occasions, random skipping, early termination, or hyperparameter tuning may be as effective, if not more so.
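The Pareto-curve comparison the guidelines above advocate can be sketched concretely: treat each training run as a (wall-clock time, final accuracy) point and keep only the runs that no other run dominates. The function names and example points below are illustrative, not taken from the paper.

```python
# Sketch of a Pareto-front comparison for training-acceleration methods.
# A work-skipping configuration is only interesting if no baseline run
# (e.g. random skipping or early stopping) is both faster and at least
# as accurate.
def dominates(a, b):
    """a dominates b if it is no slower and no less accurate,
    and strictly better on at least one axis."""
    ta, acc_a = a
    tb, acc_b = b
    return ta <= tb and acc_a >= acc_b and (ta < tb or acc_a > acc_b)

def pareto_front(runs):
    """Keep only the (time, accuracy) points not dominated by any other."""
    return [p for p in runs if not any(dominates(q, p) for q in runs if q != p)]
```

Plotting the surviving points for each method gives the Pareto curves the abstract refers to; a method whose points all fall behind a random-skipping baseline's front offers no real advantage.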
Kareem Ibrahim, Milos Nikolic, Nicholas Giamblanco, Ali Hadi Zadeh, Enrique Torres Sanchez, Andreas Moshovos
Open Access
Article
Conference Proceedings
The Digital Trust Radar – A structured collection and analysis of global AI guidelines
The concept of Digital Trust can be utilized to classify and assess the responsible design, implementation and use of artificial intelligence (AI) technologies. Laws, standards, and guidelines are essential as they support the establishment of procedures that promote responsible AI technologies and therefore broad added value, societal acceptance and public confidence in AI. This contribution introduces the 'Digital Trust Radar', a structured digital repository synthesizing seventy-eight guidelines, standards and laws relevant to establishing responsible AI in organizations. Through a systematic approach, these documents were categorized and analyzed based on various criteria including authorship, geographic focus, intended audience, AI application domain, AI type, and governance alignment. The findings reveal significant variability in the scope and thematic focus of AI-related laws, guidelines, and standards, emphasizing ethical, legal, and technical considerations. Our categorization scheme provides a comprehensive overview of international approaches to support AI governance for responsible AI and serves as a valuable resource for stakeholders navigating the complexities of AI design, integration and usage.
Janine Jäger, Jona Karg, Petra Maria Asprion, Ilya Misyura
Open Access
Article
Conference Proceedings
GoodMaps Indoor Navigation: Leveraging Computer Vision to Foster Indoor Navigation
People around the world have come to rely on digital maps–accessible with the tap of a smartphone screen–to guide them places, to locate areas with which they are unfamiliar, and to get directions to desired destinations. In a world that is overflowing with technological advancements, we expect to be shown, in real time, how to get from point A to point B. However, navigating indoors using assistive technology can prove more challenging than navigating outdoors, as indoor spaces have not traditionally been mapped the way outdoor spaces have by companies like Google and Apple. Additionally, most indoor mapping and navigation services are not very accurate or accessible to meet the needs of a variety of users, including individuals with disabilities. Founded by American Printing House for the Blind (APH) in 2019, the GoodMaps Indoor Navigation smartphone app is addressing these challenges by leveraging artificial intelligence (AI) and crowd-sourced data to scale its app and map coverage efficiently. GoodMaps uses AI–primarily through computer vision algorithms–to create highly accurate indoor maps and to enable precise navigation for users. The app allows them to navigate complex indoor spaces like airports, university campuses, and more by providing detailed turn-by-turn directions. GoodMaps’ computer vision technology interprets real-time data from a device's camera and sensors, effectively guiding users through the environment even without visual sightlines. This is particularly valuable for people with visual impairments or mobility challenges who need assistance navigating intricate indoor spaces. Currently, AI-driven translation capabilities enable the app to support four languages, with six more planned by early 2025, automating 90% of destination translations for non-English users. Looking ahead, GoodMaps aims to further harness AI to dynamically adapt to changes in indoor environments. 
AI-powered image recognition will enable automated detection of environmental updates via user device cameras, while crowd-sourced contributions from users will provide real-time feedback similar to Waze. This combination of AI and community-driven updates will streamline map maintenance, improve accessibility, and set a new standard for scalable indoor navigation solutions. GoodMaps is also partnering with Intel to deliver a high-quality indoor wayfinding solution for people who are blind or visually impaired. Safely and effectively navigating indoor spaces results in greater independence and confidence when traveling. Intel continues to investigate volumetric mapping algorithms and advances in artificial intelligence to improve the precision and accuracy of GoodMaps’ commercial indoor wayfinding service. This paper will explain GoodMaps’ user-centered design process, chronicling how user experience research has informed the development of AI-driven computer vision models to address user requirements.
Jennifer Palilonis, Charlie Meredith
Open Access
Article
Conference Proceedings
Wi-Fi Signal Analysis via Smartphones for Estimating Passenger Counts
Smartphones have become integral to daily life, offering innovative applications across various domains. This study introduces a novel method for counting passengers by analyzing Wi-Fi signals emitted by their mobile devices. The research evaluates the effectiveness of leveraging Wi-Fi data to estimate occupancy, addressing a critical issue in public transportation management. The proposed system involves three core processes: signal detection, data filtering, and passenger count estimation. Key results indicate high accuracy in moderately crowded scenarios, with average deviations of 20% from actual counts and accuracy rates between 90% and 100%. However, under high-density conditions, the system tends to overestimate, occasionally doubling the real count. While further research is required to improve precision in such settings, this study lays a foundation for leveraging digital technologies to enhance transportation operations and service delivery.
Mohammed Alatiyyah
Open Access
Article
Conference Proceedings
Behind the AI-Scenes: How FinTech Professionals Navigate Regulations and Privacy Concerns to Enhance User Experience
By 2030, global financial technology (fintech) revenues are expected to surpass $1.5 trillion US dollars, driven by the increasing adoption of digital financial services worldwide (eMarketer, 2023). The rapid advancement of artificial intelligence (AI) has significantly contributed to the fintech industry's momentum (Kasmon et al., 2024), which in turn has radically transformed the financial sector (Sahabuddin et al., 2023; Jang et al., 2021). The literature has established that fintech can be effective in improving how customers experience financial service and product offers (Gupta et al., 2023). Despite the hype over fintech technologies, the successful design and development of fintech solutions is still a challenge for many B2B and B2C businesses (Kasmon et al., 2024), and even more so because ethical and regulatory requirements for user protection are key to adoption (Israfilzade and Sadili, 2024; Heeks et al., 2023). Fintech professionals involved with product management play a crucial role as intermediaries between developers and clients in the successful implementation of such digital innovations (Jang et al., 2021; Mogaji and Nguyen, 2021). Because financial interactions involve a great deal of sensitive information sharing (e.g., credit card, account number, investments), users become increasingly vulnerable and concerned about their privacy when using fintech applications (Rjoub et al., 2023; Siddik et al., 2023). Several studies have examined the consumer’s perspective when adopting fintech products or services, but very few have investigated the perspective of fintech professionals (Hassan et al., 2023). This research aims to better guide fintech professionals in the design and development of digital fintech solutions, while ensuring adherence to legal requirements for customer protection in the Canadian financial environment.
To do so, this project aims to understand the practices and elements that define the relationship between fintech companies and their customers. This study relied on semi-structured interviews conducted with six fintech professionals involved with the design, development, regulatory compliance, and governance of AI/digital solutions in the financial sector (4 in B2B and 2 in B2C; 4 men and 2 women). Participants held titles such as CEO, lawyer specialized in AI and digital governance, and fintech director. The discussion guide covered three main topics: their relationship with their clients, regulatory constraints, and best practices. Interviews were conducted virtually and transcribed. NVivo was used for data categorization and coding, and the qualitative analysis followed the procedure advocated by Gioia et al. (2013) to ensure qualitative rigor. The findings show that (1) compliance is central to fintech, with significant resources being invested in ensuring legal adherence and transparency; (2) striking a balance between innovation and reliability is a challenge for maintaining customer relationships; and (3) privacy by design is a key concern, since customers are demanding higher levels of clarity, transparency and control over their personal data without compromising on the user experience. This study makes a significant contribution to the understanding of fintech-specific practices and challenges, recommending that fintech firms adopt detailed privacy policies to govern, manage, share and properly secure data to meet regulatory as well as customer expectations.
Massilva Dekkal, Sandrine Prom Tep, Manon Arcand, Maya Cachecho
Open Access
Article
Conference Proceedings
Optimizing Resource Allocation and Traceability in Human-Centered Design (HCD)
This study builds on previous work conducted by the researcher using a model-based framework to implement systems engineering (SE) practices and processes into a digital environment. Findings of a human-centric design (HCD) approach to system development include the optimization of resource allocation. By focusing on individual capabilities, transparency is built and employees are positioned for success. Upon incorporating these aspects, the results are anticipated to be increased traceability throughout the operational lifecycle to improve overall project management (PM). This paper builds on the findings of initial efforts to include additional model elements and the relationships between them using the Unified Architecture Framework (UAF). Results will then be assessed for applicability to actual SE processes in a digital environment.
Sarah Rudder
Open Access
Article
Conference Proceedings
Navigating Shared Space: A Preliminary Field Study Analyzing Pedestrian Path Modifications in Response to Autonomous Sidewalk Robots
As autonomous robots become more common in urban environments, understanding interactions with pedestrians is crucial for ensuring smooth human-robot coexistence. Serve Robotics, a leader in sustainable urban logistics, operates autonomous sidewalk robots in West Hollywood, California, USA, delivering food and packages. These robots, guided by sensors and AI, navigate pedestrian-heavy sidewalks. Their presence raises questions about how pedestrians adjust their walking paths to accommodate the robots. This preliminary field study was designed to gather initial insights into these behaviors, laying the groundwork for a larger, statistically significant study to be conducted in the future. Guided by Proxemics Theory, which examines how individuals manage personal space, this study investigates pedestrian responses to Serve Robotics’ delivery robots. The research question focuses on how pedestrians modify their walking paths when encountering these robots. By documenting observable changes in pedestrian movements—such as veering, slowing down, stopping, or moving closer to the edge of the sidewalk or street—the study identifies patterns influenced by environmental factors like sidewalk width, pedestrian density, and proximity to the robot. Conducted as a preliminary observational field study, the researcher acted as a non-participant observer, documenting interactions between pedestrians and robots in real-world conditions on West Hollywood’s sidewalks. Data were collected on key behaviors, including deviations in path, stops, and speed changes in response to robot movements. Additionally, environmental factors such as time of day, weather, and pedestrian density were recorded. Video recordings, with appropriate consent signage, were used to analyze specific pedestrian behaviors and accurately measure deviations from original paths. This preliminary study utilized both quantitative and qualitative methods.
The quantitative analysis specifically measured the frequency and degree of swerving, where pedestrians veered off their original walking path. It also tracked the number of stops or pauses pedestrians made in response to the robots, as well as changes in walking speed, such as slowing down or speeding up when approaching the robots. Additionally, the study recorded proximity of pedestrians to the robots, quantifying how close they allowed the robots to come before altering their path. Data were collected under various conditions, including sidewalk width (narrow versus wide) and pedestrian density (low, moderate, or high). These metrics helped identify behavioral patterns, such as whether narrower sidewalks or faster-moving robots resulted in more significant path deviations or stopping. Qualitative thematic analysis categorized pedestrian responses—such as avoidance, curiosity, or neutrality—based on observational data. Findings from this initial study provide important insights into how autonomous robots impact pedestrian traffic flow. Key behavioral trends identified in this phase of the research can inform the design of human-centered robots that navigate urban environments without causing pedestrian discomfort or creating bottlenecks. Serve Robotics and similar organizations may benefit from adjusting robot speed and proximity to pedestrians or implementing signaling mechanisms to reduce pedestrian path alterations. Insights also suggest potential municipal policies regarding where and how robots should operate on sidewalks to minimize disruptions. Initial findings underscore the need for more extensive research to develop a fuller understanding of pedestrian responses to autonomous robots. A larger study will build on the groundwork established here, aiming to help optimize the design and integration of autonomous systems into public spaces, enhancing pedestrian experiences and advancing the adoption of autonomous technologies.
The findings from both this preliminary research and a future larger study will contribute significantly to the broader Human-Robot Interaction field and inform the development of robotic systems designed for more seamless interaction in real-world settings.
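The swerving and stopping metrics described above could be derived from tracked pedestrian positions. As a purely illustrative sketch (the study's actual measurement procedure is not specified here, and the function names and the 0.5 m threshold are assumptions), lateral deviation from a straight-line reference path might be computed as follows:

```python
import math

def max_lateral_deviation(path):
    """Maximum perpendicular distance (in meters) of any tracked point
    from the straight line joining the path's start and end points."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    if length == 0:
        return 0.0
    # Point-to-line distance via the cross-product formula
    return max(abs(dy * (x - x0) - dx * (y - y0)) / length for x, y in path)

def classify_response(deviation_m, stopped, threshold=0.5):
    """Label a pedestrian response from simple movement features
    (threshold chosen arbitrarily for illustration)."""
    if stopped:
        return "stop"
    if deviation_m > threshold:
        return "swerve"
    return "no_change"
```

For example, a track that bulges one meter sideways around a robot yields a deviation of 1.0 m and would be labeled a swerve under this sketch.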
Robert Marohn
Open Access
Article
Conference Proceedings
Child-Friendly Human-AI Interaction: Designing Tangible User Interfaces for Preschool Children to Prompt Generative AI
Generative artificial intelligence (AI) encompasses advanced computational models that, when prompted, can generate coherent and high-quality content such as text, images, videos, tunes, and code, by leveraging the patterns and structures present in the data they have been trained with. Popular examples of commercial tools using such models are ChatGPT (text-to-text), Midjourney (text-to-image) and Vizcom (image-to-image). Current use cases are largely developed for general or professional purposes, which are not suitable for child users due to unregulated content, lack of child-friendly use cases or age-appropriate interaction modalities. Therefore, we propose tangible user interfaces (TUIs) as a potentially suitable approach that bridges the physical and digital world by utilising physical interaction to engage with computers. Interaction with TUIs involves physically manipulating technologically augmented objects to control the digital output. Compared to graphical user interfaces (GUIs), which rely on indirect manipulation via “windows, icons, menus, pointer” (WIMP), TUIs provide direct manipulation and are hence more straightforward. For these reasons, TUIs are considered developmentally appropriate for preschool children, who seek concrete interactions due to still-emerging abstract thinking and fine motor skills. This paper presents a case study, a six-week design project carried out in an undergraduate industrial design programme, during which students were expected to design tangibles to create child-friendly digital content. Throughout the project, students developed TUIs for prompting generative AI to create interactive experiences that are developmentally appropriate and engaging for preschool children. The outputs were conceptual designs that made use of TUIs for media generation such as composing songs, writing stories, creating artwork and virtual environments. In this paper, we present the educational frame and sample conceptual design outputs. 
We discuss potential design strategies to consider while developing child-friendly human-AI interactions, such as customisation scenarios, parental roles, and balancing physical and digital interactions. Our work contributes to the user-centred development of future technologies that offer meaningful, engaging and safe experiences for children.
Sedef Süner Pla Cerda, Batuhan Şahin, Ecem Kumbasar
Open Access
Article
Conference Proceedings
Privacy Concerns in Recommender Systems for Personalized Learning at the Workplace: The Mediating Role of Perceived Trustworthiness
Artificial intelligence (AI) is capable of reconfiguring activities in Human Resource Management (HRM), including talent acquisition, performance management and learning and development (Minbaeva, 2021). The integration of AI into HRM systems can optimize processes, such as comprehensive needs assessments for learning and development, which would otherwise be lengthy and time-consuming. Moreover, the integration of AI in HRM has the potential to enhance decision-making processes and employee experience (Strohmeier, 2020). The use of big data and personal information in AI-based HRM systems to provide employees with personalized learning recommendations gives rise to privacy concerns. These concerns must be addressed in order to guarantee a responsible and calibrated use of these technologies. In the event that users express concerns regarding the adequate protection of their personal information by the system, they may perceive the system as untrustworthy and, consequently, refrain from using the system. In the context of privacy concerns, trust(worthiness) is assumed to be one of the most crucial predictors of behavior (e.g., intention to use a system). However, the explicit role of perceived trustworthiness in the relationship between privacy concerns and the intention to use an AI-based system has yet to be demonstrated. The aim of the present study was to investigate whether there exists a mediating effect of perceived trustworthiness on the relation between privacy concerns and the intention to use an AI-based recommender system for workplace learning. An online experiment was developed to simulate such a system. The analysis of this study is based on data of 69 participants (employees, 29 female, age M = 33.28 years, SD = 10.49) from one of the two experimental conditions, in which they were permitted to determine which personal information to provide for a personalized learning recommendation. 
The mean interaction time with the recommender system was 43.23 minutes (SD = 18.64). The participants completed questionnaires addressing a range of different constructs, including perceived trustworthiness, privacy concerns and intention to use. Contrary to previous studies postulating privacy concerns as a predictor of privacy behavior, the analysis showed no direct effect of privacy concerns on intention to use the system (B = -0.001, p > .05). However, the results indicated that privacy concerns significantly predicted perceived trustworthiness (B = -0.170, p < .05), which in turn significantly predicted the intention to use the system (B = 0.936, p < .01). Therefore, privacy concerns exert an indirect influence on the intention to use the system through perceived trustworthiness. The results underscore the significance of perceived trustworthiness in the context of privacy concerns and the intention to use an AI-based recommender system for workplace learning. This study represents a preliminary step towards addressing the research gap on the role of trust(worthiness) in the context of privacy concerns, as proposed by previous studies. Implications can be derived for the design of human-centered recommender systems for workplace learning, taking into account increasing perceived trustworthiness and reducing privacy concerns. Future research should continue to investigate additional factors in the relationship of privacy concerns, attitudes and behavior, for instance, perceived control over personal information.
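The mediation pattern reported above (privacy concerns predicting perceived trustworthiness, which in turn predicts intention to use) corresponds to an indirect effect a×b, where a is the path from predictor to mediator and b the path from mediator to outcome, controlling for the predictor. A minimal illustrative sketch of that computation, not the authors' analysis code and with synthetic data only:

```python
def ols(y, *xs):
    """Least-squares coefficients [intercept, b1, b2, ...] via the
    normal equations, solved with Gaussian elimination."""
    cols = [[1.0] * len(y)] + [list(x) for x in xs]
    k = len(cols)
    A = [[sum(a * b for a, b in zip(cols[i], cols[j])) for j in range(k)]
         for i in range(k)]
    v = [sum(c * yi for c, yi in zip(cols[i], y)) for i in range(k)]
    for i in range(k):  # forward elimination with partial pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            v[r] -= f * v[i]
    beta = [0.0] * k
    for i in reversed(range(k)):  # back substitution
        beta[i] = (v[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, k))) / A[i][i]
    return beta

def indirect_effect(privacy, trust, intention):
    a = ols(trust, privacy)[1]             # a-path: privacy -> trust
    b = ols(intention, privacy, trust)[2]  # b-path, controlling for privacy
    return a * b
```

In practice the indirect effect's significance would be assessed with bootstrapped confidence intervals rather than the point estimate alone.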
Marina Klostermann, Lina Kluy
Open Access
Article
Conference Proceedings
Human Interactions with Holocaust Survivor AIs: Current and Future Applications of Visitors’ Interactions with Holocaust Survivor “Holograms”
Currently in use at over a dozen museums worldwide, pre-recorded interviews with individual Holocaust survivors incorporate specialized display technology and natural language processing to generate interactive conversations between survivors and visitors. These non-generative AI recordings, created by the USC Shoah Foundation Dimensions in Testimony (DiT) project, are prepared to answer well over 1000 possible questions visitors might ask of them. These current-day interactions with DiT recordings of Holocaust survivors are indebted to a cadre of historians who recognized it was vital to gather the testimony of those who were persecuted. As this paper will demonstrate, for the past eight decades historians, archivists, and technology specialists have worked persistently and creatively to collect, to preserve, and to provide access to the eyewitness testimonies of those who survived the Shoah.
Cayo Gamber
Open Access
Article
Conference Proceedings
Integrating Artificial Intelligence into the Human-Centered Design Process: Enhancing Creativity and User-Centricity in Architectural Education
This paper examines the role of Artificial Intelligence (AI) within the Human-Centered Design (HCD) framework in architectural education, focusing on its implementation in the MArch program at Xi'an Jiaotong-Liverpool University. Over the last five years, AI has been embedded into the HCD's three core phases: Hear, Create, and Deliver, enhancing creativity, decision-making, and problem-solving. In the Hear phase, AI analyses user-generated data to guide design decisions. In the Create phase, AI emerges as a co-designer, offering innovative ideas and collaborating with human designers to refine design concepts. This partnership is crucial for boosting creativity. In the Deliver phase, AI aids in refining designs by optimising technical and aesthetic aspects through simulations and feedback loops. The integration of AI has notably improved creativity, efficiency, and user-focused outcomes, paving the way for more inclusive and sustainable designs. However, challenges such as ethical concerns and the need to balance AI's analytical capabilities with the intuitive aspects of design remain. Reflecting on AI's journey from experimental use to an integrated tool in HCD, this paper is a starting point for further research to enhance AI's predictive capabilities and its role in preparing students for complex future architectural challenges.
Juan Carlos Dall'asta
Open Access
Article
Conference Proceedings
Bridging the gap: workshop results on the interaction between human creativity and artificial intelligence
In the expansive realm of contemporary Artificial Intelligence (AI) technologies, designers and architects are challenged to collaborate synergistically with these powerful tools. While the potential of AI is considerable, it also gives rise to significant questions regarding the intellectual property of generated works and the nature of interaction between humans and technology. Furthermore, these technologies challenge the definition of creativity, prompting the question of how to distinguish the role of a human from that of a machine. This raises the question of how the transition from human ingenuity to automation can be achieved in a way that doesn’t compromise the uniqueness of human contributions in fields such as creativity and the arts. It is therefore necessary to assess how far tasks can be delegated to AI without compromising the value of the inputs that humans are able to provide. Responses to these questions can be identified through a comprehensive examination of recent technological developments that promote informed and conscious use. These reflections inspired the organisation of the Alter Ego Symposium and Workshop, which took place in April and May 2024 at the Department of Architecture and Design at the University of Genoa. The symposium aimed to encourage Italian PhD students and researchers to consider the potential benefits of integrating AI into academic research. The workshop, by contrast, addressed to department students, focused on exploring image generation technologies and their ability to translate verbal prompts into detailed visual representations. The workshop consisted of two phases. The first phase featured presentations by academic experts who introduced fundamental concepts of AI and analyzed existing generative image tools. This provided students with a comprehensive overview of various AI types, their applications, and their limitations. 
In the second phase, participants focused on the practical application of the knowledge acquired. Based on a contribution presented at the Alter Ego Symposium, students were required to create two works representing its content. The first work was created using traditional tools like paper and pencil, while the second employed generative AI technologies. The students employed the OpenAI Copilot system to generate an image that was as similar as possible to the original hand-drawn one. Through a process of iterative modification of prompts, the students came to understand the importance of precision in language and word choice for achieving satisfactory visual results. The final comparison of outcomes, which will be explored in detail in the full paper, highlighted the significant role that AI tools can play in supporting design and concept development processes. However, it also emphasised the necessity for designers to communicate effectively and accurately with these systems in order to achieve results that meet user expectations. Currently, while AI is capable of processing natural language, it still encounters difficulties in autonomously interpreting the full range of semantic nuances present in the provided prompt, particularly when imprecise terms are used. Despite the rapid advancements in AI through self-learning algorithms, the point at which humans can be entirely replaced in design and creative content generation remains distant.
Isabella Nevoso, Elena Polleri, Caterina Battaglia
Open Access
Article
Conference Proceedings
The Impact of Human Implication for AI-Supported Decisions over Perception of Trust, Agency and Dignity
Recent developments in artificial intelligence (AI), more specifically in generative AI, are disrupting our lives. The integration of generative AI raises questions pertaining not only to the performance and accuracy of the AI system, but also to the boundaries of the roles of both human and AI. This calls for a better understanding of the perception of human dignity across different uses of generative AI, but also for comprehending how said perception may interact with trust in the AI and sense of agency. The goal of the current study was to evaluate the perception of human dignity, trust and sense of agency across different uses of AI-supported decisions depending on the context of use and on the level of implication of the human decision maker. We presented participants with a series of vignettes where generative AI systems were used to support decision making in five domains of use (health, business, humanities, arts, and technology) and four types of support (for decision support, communication, creativity, and research). The level of human implication regarding the decision was also manipulated across two conditions. Sense of agency, trust in the AI, perception of appropriateness for the AI to make a decision, as well as interpersonal justice and dehumanization level measures were collected for each vignette. Results outlined that sense of agency differed across conditions. Domain of use influenced sense of agency, trust in the AI, decision appropriateness and dehumanization perceptions, with differences emerging mostly for health-related vignettes. The type of support also impacted trust and decision appropriateness, with more positive perceptions for vignettes discussing creativity use cases. Overall, our study sheds light on the general population's perceptions of different types of AI use and how components such as perception of agency, trust and dignity may vary depending on the nature of the use.
Camille Zinopoulos, Adam Fahmi, Sophie Boudreault, Alexandre Marois
Open Access
Article
Conference Proceedings
Interplay of capability and personality when cooperating with autonomy
Based on prior exploratory results, a positive relationship with technology is associated with algorithmic thinking skills. Furthermore, the compounding effect of this relationship and higher algorithmic thinking skills could affect task performance with unmanned autonomous ground vehicles. In this paper, a further analysis takes into consideration the accuracy of this subjective measure compared to objective data from the experiment. There is also a connection between task performance and personal attributes. This paper studies the interplay between personality, algorithmic thinking, and performance with autonomy. The rich data set is also discussed, and methodological implications related to combining different types of data are brought up. The results are derived from simulated combat scenarios where squad and platoon leaders utilized the UGVs as part of the defending force. The data consist of interaction data from the UGV user interface, UX surveys, performance data, and background data of the participants. The participants in the experiment comprised 431 conscripts, 27 commissioned officers, and 37 armored reserve officer students, all from the Armored Brigade of Finland. The experiments were run during May and June 2024.
Jussi Okkonen, Mia Laine, Christian Andersson
Open Access
Article
Conference Proceedings
Casualty evacuation process comparison of single patient evacuation with unmanned ground vehicles to multiple carrier evacuation from conflict zones
Optimization of casualty evacuation from conflict zones aims at increasing the chance of a critically wounded soldier or civilian reaching life-saving care, minimizing secondary damages, and maximizing the utilization of available emergency medical resources. With the emergence of small, (autonomous) unmanned ground vehicles (UGVs), the initial evacuation away from the frontlines could be possible earlier and in a near-continuous fashion. This study evaluates the limits of the increased efficiency of employing autonomous evacuation UGVs capable of transporting one patient at a time. Only the initial combat evacuation to the battalion area of service, or the combat nurse’s station, is considered. The baseline information, such as ranges of distances, was obtained from a live simulation experiment whose participants consisted of 431 conscripts, 27 commissioned officers, and 37 armored reserve officer students, all from the Armored Brigade of Finland. The experiments were run during May and June 2024. The participants were divided into groups, and each group completed four conflict scenarios. In half of the scenarios, the evacuation UGV was remotely operated, and in the other half it was implemented as a fully autonomous and mature system using the Wizard of Oz method. The results of this paper give estimates of a sufficient number of continually operating evacuation UGVs necessary to evacuate 100 casualties within 60 minutes, and estimated differences in cost-effectiveness compared to an evacuation vehicle with a larger capacity.
Mia Laine, Jussi Okkonen, Svante Laine, Christian Andersson
Open Access
Article
Conference Proceedings
An Alternative Approach to Distributed Data Communication Systems
In today's increasingly interconnected world, the demand for efficient, resilient, and fault-tolerant distributed data communication systems is paramount. This research explores a novel alternative approach to address the challenges of traditional distributed systems. The study investigates the integration of cutting-edge technologies, such as decentralized networks, blockchain, and Software-Defined Wide Area Network (SD-WAN), to revolutionize data communication. This alternative approach aims to enhance system efficiency, scalability and reliability while reducing vulnerabilities associated with centralized systems. By leveraging decentralization principles, networking automation approaches and distributed ledger technology, it prioritizes data efficiency, integrity, and security, presenting a transformative vision for network infrastructure. This research contributes to the ongoing discussions about distributed data transmission systems. It opens up a new perspective and paves the way for future achievements in this field.
Zhanna Gabbassova
Open Access
Article
Conference Proceedings
Defining autonomous functionalities of narrow artificial intelligences for a defensive unmanned ground vehicle to enhance human-UGV teaming performance for defending forces
There is a growing need to integrate artificial intelligence (AI) into military systems, particularly unmanned vehicles (UxVs). In this paper, the application of AI in defensive combat scenarios involving UGVs is explored. The report is based on an extensive quasi-experiment (n = 458) in a simulation environment. The experiment employed the Wizard of Oz methodology to simulate autonomy in Laykka unmanned ground vehicles (UGVs) within the Virtual Battle Space 4 (VBS4) platform, conducted in collaboration with the Finnish Defense Forces (FDF). Participants included conscripts and enlisted staff officers, with five operators managing the UGVs. The simulation involved participants taking roles in a defending platoon supported by 16 autonomous UGVs and in an attacking mechanized infantry company. A total of 48 scenarios were conducted, with data collected through questionnaires, mock graphical user interfaces, qualitative interviews, and scenario event analysis. This explorative paper focuses particularly on the aspects of autonomy and its possible uses learned from the simulation. Questionnaire and simulation data collected from users, operators, and observers are utilized to identify potential requirements and optimal locations for the integration of autonomy in UGVs. The findings highlight the necessity for highly structured command inputs when deploying AI in military contexts. Furthermore, the study suggests that AI is not always essential, and when utilized, it should be restricted to specific, well-defined tasks and functions.
Christian Andersson, Mia Laine, Jussi Okkonen
Open Access
Article
Conference Proceedings
Development of initial data for AI Models for the Improvement of Computer Workers' Health Status
To assemble an AI model for improving office workers’ occupational health, various data on the work environment and bodily ailments are needed. The current paper presents data on the health disturbances (musculoskeletal disorders) of 116 office workers, measured with myotonometry; the Nordic Musculoskeletal Questionnaire was also used. A review of the scientific literature provides further initial data for the AI model. The results contain the data from myotonometry and the VAS pain scale. Work-related musculoskeletal disorders are the most common workplace health hazard. The results show that trapezius muscle stiffness was high, whereas thumb muscle stiffness was low, compared with patients with an occupational disease. On the basis of the measurements and questionnaire analysis, the model for the AI initial data was compiled, consisting of three parts: 1) work environment factors influencing people, their influence on the organ systems, functional stages of occupational disorders, and loss of work capacity; 2) computer software used by computer workers; 3) possible preventive actions. At the end of the paper, recommendations for managing workplace ergonomics are given. Balneotherapy is one of the possible rehabilitation methods in Estonia.
Piia Tint, Viiu Tuulik, Ada Traumann, Viive Pille
Open Access
Article
Conference Proceedings
Cultural Differences in Perception and Engagement of AI-generated Online Ads
AI-generated advertising media are fascinating for online advertising, as they can be used to achieve a high degree of personalization at a low cost. However, they also introduce unique challenges. The seamless integration of text and visuals, the ability to capture and retain audience attention, and the effectiveness of AI-generated content in diverse cultures are all areas that require in-depth understanding. This understanding is crucial as companies increasingly rely on AI to enhance their marketing efforts. In this study, we examine cultural differences in the perception of and interaction with Instagram advertising created using generative AI. We surveyed 75 people from Colombia and 41 from Austria to investigate how these two groups differ. For this purpose, an application was developed that generates ads for Instagram based on the GPT-4 and DALL-E 3 AI systems, which can create text and images. To further define the intended demographic for the advertisements, a persona generator was developed to generate basic user profiles. Both target groups were then surveyed using a further application that presents a structured questionnaire. Six ads for each of five target groups, i.e., 30 in total, were created and presented to the test persons, followed by the questionnaire. The questionnaire asks about aspects such as the clarity of the message, the trustworthiness of the ad, whether it is visually appealing, whether it matches the respondent's interests, whether it attracts attention, and whether the respondent would interact with it, on a 5-point scale. In addition, free-text questions ask which elements of the ad encourage interaction and which emotional responses arise. Specifically, the analysis aimed to investigate differences in engagement, visual appeal, relevance, and other factors that could influence the perception of the ads. To achieve this, a t-test was conducted to determine the significance of differences in the answers to each question in the questionnaire. 
Additionally, a separate analysis focused exclusively on ads related to “Skiing” and a “Hotel in the Alps.” This was done to see whether filtering for elements culturally significant to Austrians would yield significant response differences. The cross-cultural analysis showed numerous significant differences in how Colombians and Austrians evaluated AI-generated advertisements. For example, Colombians consistently found the ads more visually appealing than Austrians did. Colombians ranked culturally relevant advertising, such as ads about skiing and the Alps, higher in visual appeal, and these ads caught their attention more successfully. Regarding engagement, Colombians were more inclined to interact with the ads in the general comparison. However, this difference in interaction likelihood became less pronounced when only culturally specific ads were considered. Interestingly, while the general comparison showed no significant difference in overall quality ratings between the two groups, the filtered analysis for culturally specific ads revealed that Colombians rated the overall quality higher than Austrians. This suggests that Colombians found the culturally relevant ads more enjoyable overall. However, both groups' perceptions of clarity, credibility, and relevance remained similar, with no significant differences observed, indicating that these aspects were less influenced by cultural context. The study was constrained by the capabilities of the AI models used, specifically GPT-4 and DALL-E 3. These models, while advanced, still fall short of fully understanding and replicating human creativity, particularly in areas such as appropriate design, cultural references, and the integration of text and visuals. The lack of interaction between the text and image generation phases is a limitation that often results in inconsistencies between the content and visuals. Another constraint of the model is its overuse of certain words in the captions or in the image text. 
It sticks to words like “Enhance” and “Elevate” regardless of the product or service, which deteriorates the quality of the final output. Integrating the text and image generation models could significantly improve the coherence and quality of AI-generated ads. Using tools like ChatGPT-Vision to offer feedback on the generated DALL-E image to GPT could be a step toward automating the whole process. The analysis was based on responses from participants from two cultural backgrounds (Austrian and Colombian). While this allowed for some cross-cultural insights, the sample size and cultural diversity were limited, which may affect the generalizability of the findings. Future studies with a more diverse participant pool could provide a broader understanding of how different cultures perceive AI-generated content.
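The group comparisons above rest on independent-samples t-tests over questionnaire ratings. As a minimal sketch, here is Welch's variant, which does not assume equal variances (a reasonable default for unequal group sizes such as 75 vs. 41; the paper does not state which variant was used, and the sample data below are invented):

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic and degrees of freedom for two independent
    samples with possibly unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    # Unbiased sample variances
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    se2 = var_a / na + var_b / nb
    t = (mean_a - mean_b) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for degrees of freedom
    df = se2 ** 2 / ((var_a / na) ** 2 / (na - 1)
                     + (var_b / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical 5-point appeal ratings from two groups
t_stat, dof = welch_t([4, 5, 3, 4], [2, 3, 2, 3])
```

The resulting t statistic would then be compared against the t distribution with the computed degrees of freedom to obtain a p-value.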
Andreas Stöckl, Daniel Diaz
Open Access
Article
Conference Proceedings
AI-Powered Auditory Control and Augmented Reality Interfaces for UAVs - A Contactless Control and Situation Awareness Concept
Unmanned Aerial Vehicles (UAVs) are increasingly utilized in military and civilian tasks such as search and rescue; however, traditional operation methods can be risky in hazardous situations. This article presents a novel UAV control concept leveraging artificial intelligence (AI) and Augmented Reality (AR) technology, allowing operators to manage drones without handheld devices through audio-based input and output. The suggested system employs headsets and AR glasses to provide real-time visual feedback, enhancing situational awareness and decision-making by displaying critical data such as UAV position and detected hazards within the operator's field of view. The concept comprises five key components implemented within the Robot Operating System (ROS): Audio Input, Task Allocation, UAV Control, Situation Picture, and Output Units. Speech is processed using models such as Whisper, and commands are interpreted by a Large Language Model (LLM) like GPT-4, ensuring accurate recognition even in noisy environments. Initial experiments show high command recognition accuracy, indicating the concept's potential for reliable UAV control in real-world scenarios. Overall, this approach aims to improve operational efficiency and safety in UAV operations, with future work focusing on system refinement and advanced language processing.
Joshua Gehlen, Alina Schmitz-hübsch, Sebastian Handke, Wolfgang Koch
Open Access
Article
Conference Proceedings
Construal Level Theory (CLT) for designing explanation interfaces in operational contexts
Explainability is essential to fostering trust, transparency, and effective Human-AI Teaming (HAT) in high-stakes operational contexts where humans interact with complex AI systems. This paper presents the application of Construal Level Theory (CLT), a psychological framework, to design explainability interfaces in safety-critical contexts where the quantity of information and the time required to process it are critical factors. The CLT was originally developed to explain how individuals mentally construe objects and events at different levels of abstraction based on psychological distance (temporal, spatial, or social). The CLT has since been applied in the design of user interfaces, where it serves as the theoretical framework to structure information retrieval systems so that users can progressively query data at different levels of abstraction. Building on this foundational work, our contribution extends the CLT’s application to design explanation interfaces tailored to operators of AI systems used in six aviation use cases, including cockpit, air traffic control tower and airport operations. Our use of the CLT framework addresses key explainability questions in such systems: What information should be presented? When should it be shown? For how long? and At what level of detail? This paper outlines the design methodology and demonstrates its application in one Use Case where an Intelligent Sequence Assistant (ISA) is being developed to support and enhance decision-making for Air Traffic Controllers. ISA optimises runway utilisation in single-runway airports, providing real-time sequence suggestions for arriving and departing aircraft. These operational suggestions are accompanied by text-based explanations for all the sequence changes, structured according to the CLT in various levels of detail. Controllers can progressively query these explanations (e.g. 
by interacting with dedicated sections of the interface) to access the desired level of detail, build situational awareness, and understand the assistant’s reasoning. While the CLT provides a framework for structuring the information and the interaction with the system, it does not prescribe how the information should be visually presented on the Human-Machine Interface (HMI), leaving this decision to the designer.
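As an illustrative sketch of the progressive-disclosure idea (the class, level texts, and aircraft identifiers are hypothetical, not taken from the ISA system), a CLT-structured explanation can be stored as ordered construal levels that the controller reveals one at a time:

```python
from dataclasses import dataclass

@dataclass
class LayeredExplanation:
    """Explanation text at increasing levels of detail, ordered from most
    abstract to most concrete, so an operator can progressively query
    deeper construal levels (CLT)."""
    levels: list
    shown: int = 1  # number of levels currently revealed

    def reveal_more(self):
        """Reveal one more level (if any) and return the visible text."""
        self.shown = min(self.shown + 1, len(self.levels))
        return self.levels[:self.shown]

# Hypothetical sequence-change explanation for a single-runway airport
swap_explanation = LayeredExplanation(levels=[
    "Swapped AC101 ahead of DL202 in the arrival sequence.",
    "AC101 is on final approach; delaying it would force a go-around.",
    "Predicted runway occupancy shows the swap saves 90 s of total delay.",
])
```

Initially only the most abstract line is visible; each query from the controller reveals the next, more concrete level of the assistant's reasoning.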
Roberto Venditti, Narek Minaskan, Evmorfia Biliri, Miguel Villegas, Barry Kirwan, Carl Westin, Jekaterina Basjuka, Simone Pozzi
Open Access
Article
Conference Proceedings
Enhancing Android Security Through Artificial Intelligence: A Hyperparameter-Tuned Deep Learning Approach for Robust Software Vulnerability Detection
Detecting software vulnerabilities is essential for cybersecurity, particularly in Android systems, which are widely used and vulnerable due to their open-source nature. Conventional signature-based malware detection methods are inadequate against sophisticated and evolving threats. This paper introduces a Hyperparameter-Tuned Deep Learning Approach for Robust Software Vulnerability Detection (HPTDLA-RSVD) aimed at enhancing Android security through an optimized deep learning model. The HPTDLA-RSVD methodology encompasses min-max data normalization, feature selection using the Ant Lion Optimizer (ALO), classification via a Deep Belief Network (DBN), and hyperparameter optimization with the Slime Mould Algorithm (SMA). Experimental evaluations on a benchmark dataset reveal that HPTDLA-RSVD surpasses existing techniques across multiple performance metrics, confirming its efficacy in identifying and mitigating software vulnerabilities on Android platforms.
Mohammed Assiri
Open Access
Article
Conference Proceedings
Technology Innovation of Artificial Intelligence in Building Sector: Present Status and Challenges
As one of the least digitalized industries in the world, the building and construction sector faces great challenges in sustainable growth. Its highly fragmented structure and the high threshold for R&D investment have prevented the industry from swift technological innovation. In many industries, such as retail and telecommunications, artificial intelligence (AI) is producing a revolution, helping to increase profits and improve efficiency, security, and safety. The application of this advanced technology to the building sector, however, lags far behind. AI is considered able to assist the building and construction industry with waste reduction through decision-making on complexity, and with energy management, e.g., by identifying black holes of energy consumption during operation, or by applying data mining and machine learning to big data to optimize scenarios for sustainability and enable real-time feedback and regulation during operation. Earlier research on technological innovation in the Yangtze River Delta revealed that AI accounted for fewer than 10 patent-filing records in the dataset and has so far rarely been mixed with other technologies. Unlike other technologies, in which state-owned enterprises have at least some role in knowledge production, applicants in the field of AI are mainly private in nature; the known companies are from Zhejiang. In view of these inadequacies, a broader look at how this technology is being used across a greater geographic sphere is needed. This research broadens the search of patent applications for AI in the field of building construction to reveal a panorama of how this technology has been applied across the globe. It generates insights into the potential of AI in the building industry and opens a forum for future discussion.
Lingyue Li
Open Access
Article
Conference Proceedings
Deploying a Transformer-based Model in Microservices Architecture: An Approach for Real-Time Body Pose Classification
Real-time body pose classification is essential in preventing injuries caused by repetitive strain or poor ergonomics. In industrial environments, ensuring worker safety often requires monitoring the poses of multiple individuals performing different tasks. However, analysing the movements of many workers simultaneously presents computational challenges, potentially impacting accuracy and latency. In this context, a microservices architecture offers significant advantages by enabling individual application functionalities to operate independently. This architecture also allows systems to scale efficiently in response to specific workload demands by adding CPU, memory, and storage resources, improving system performance and resource efficiency. This study evaluates the scalability of a real-time body pose classification system deployed using a microservices architecture, comparing it against a traditional monolithic approach. The system uses a transformer-based model designed to monitor awkward body positions and identify constraints in joint movements. The methodology involves offline training on sequential data representing body joint angles, collected using an IMU sensor-based motion capture (MoCap) system that streams joint angles wirelessly from participants performing logistic tasks, such as lifting and carrying sandbags, in an industrial setting. Once trained, the classification model is deployed in real time, receiving streaming data via a Kafka topic for live body pose classification. In the microservices-based application, the streamed data passes through a data processing microservice and then through an inference/classification service that predicts the real-time body pose; in the monolithic application, processing and inference are computed in the same code base. Inference results from both architectures are stored in a time-series database for performance analysis. Scalability tests were conducted by deploying services for varying numbers of participants (one, three, five, and ten) in parallel across both architectural setups. Data throughput, latency, and resource utilisation (CPU and memory usage) were monitored during load testing. The results show that the microservices architecture outperforms the monolithic architecture in scalability: when scaled to accommodate multiple participants, it achieved higher data throughput, reduced latency by 18-48%, and decreased CPU usage by 18-44%. These findings validate the effectiveness of microservices architecture in enhancing the performance and scalability of real-time body pose classification systems.
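The two-service split described above can be illustrated with a toy pipeline in which in-process queues stand in for the Kafka topics; the service logic and the threshold rule are invented placeholders, not the paper's transformer model:

```python
from queue import Queue

# Queues standing in for Kafka topics between decoupled services.
raw_topic: Queue = Queue()
feature_topic: Queue = Queue()

def preprocessing_service() -> None:
    """Data-processing microservice: normalise one frame of joint
    angles (degrees -> [0, 1]) and forward it downstream."""
    frame = raw_topic.get()
    feature_topic.put([angle / 180.0 for angle in frame])

def inference_service(threshold: float = 0.5) -> str:
    """Inference microservice: a placeholder rule flags a pose as
    'awkward' when any normalised joint angle exceeds the threshold
    (the real system runs a transformer-based classifier here)."""
    features = feature_topic.get()
    return "awkward" if max(features) > threshold else "neutral"
```

Because each stage only reads from and writes to a topic, it can be replicated independently per participant, which is what gives the microservices deployment its scalability advantage over the monolith.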
Enrique Bances, Vedant Dalvi, Urs Schneider, Thomas Bauernhansl
Open Access
Article
Conference Proceedings
Effects of uncertain knowledge in water level prediction using an LSTM Neural Network
This article demonstrates how uncertainty in the knowledge base and input data of artificial neural networks affects the accuracy of their predictions. We introduce a new approach for dealing with the omnipresent prediction error of machine learning methods: specifically identifying and decreasing uncertainty, under various scenarios, in the knowledge base and database in order to increase the accuracy of the model forecasts. The data manipulation experiments in this paper show that uncertainty in the model forecasts can be measured by observing the change in the prediction error. The use case is a water level prediction model for a closed harbour basin based on a Long Short-Term Memory (LSTM) neural network. Our model, developed using standardised AI modules, predicts future water levels based on historical data and thus optimises energy efficiency and logistical processes for a tide-independent industrial port. Various scenarios for the origin of uncertainties in the datasets are simulated through targeted manipulation of the historical dataset. We show the significant impact of uncertainty on accuracy, which supports the idea of dealing with uncertainty to enhance artificial neural networks in logistic processes.
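One way to simulate an uncertainty scenario of the kind described above is to inject noise into the historical series and observe the change in prediction error; the Gaussian noise model and the error metric below are assumptions for illustration, not the paper's exact manipulation scheme:

```python
import random

def perturb_series(series, noise_scale, seed=0):
    """Simulate uncertainty in the historical water-level data by
    adding zero-mean Gaussian noise of the given scale."""
    rng = random.Random(seed)
    return [v + rng.gauss(0.0, noise_scale) for v in series]

def mean_abs_error(pred, truth):
    """Error metric used to quantify the effect of injected uncertainty
    by comparing forecasts against the unperturbed reference."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(truth)
```

Training the LSTM on increasingly perturbed copies of the dataset and comparing the resulting forecast errors gives the measurable link between uncertainty and accuracy that the experiments rely on.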
Thimo Florian Schindler, Tammo Francksen, Jan-Hendrik Ohlendorf
Open Access
Article
Conference Proceedings
CRNSim: A New Similarity Index Capturing Global and Local Spectral Differences in Hyperspectral Data
Hyperspectral imaging (HSI) enables detailed spectral analysis across numerous bands, offering transformative potential in diverse domains such as remote sensing, agriculture, and medical diagnostics. However, the inherent challenges of inter-class similarity, intra-class variability, and limitations in existing similarity metrics hinder its effectiveness. To address these challenges, we propose CRNSim, a novel similarity index that integrates three complementary components: a Chebyshev-based term to capture extreme spectral deviations, an RMSE-based term to account for global spectral trends, and a nonlinear adjustment factor to enhance sensitivity to subtle variations while mitigating outlier influence. Experimental evaluations on benchmark hyperspectral datasets, including Indian Pines and Salinas Valley, demonstrate the superiority of CRNSim in improving inter-class separability, outperforming traditional metrics such as Chebyshev and RMSE. These findings highlight CRNSim’s potential to advance HSI analysis methodologies, making it a robust tool for fine-grained spectral differentiation across diverse applications.
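The three components of CRNSim can be sketched as follows; the equal weighting and the log-based nonlinear adjustment are assumptions for illustration, and the paper's exact formulation may differ:

```python
import math

def crnsim_like(x, y, alpha=0.5):
    """Illustrative CRNSim-style dissimilarity between two spectra.
    Combines a Chebyshev term (extreme deviations), an RMSE term
    (global trend), and a nonlinear adjustment (log-compression, which
    dampens outliers while staying sensitive to small differences)."""
    cheb = max(abs(a - b) for a, b in zip(x, y))
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))
    return math.log1p(alpha * cheb + (1 - alpha) * rmse)
```

Identical spectra score 0, and the score grows monotonically with both peak and average deviation, which is the behaviour such an index needs for inter-class separability.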
Jungkwon Kim, Sangmin Kim, Jungi Lee, Kwangsun Yoo, Seokjoo Byun
Open Access
Article
Conference Proceedings
Security in Information Systems with Artificial Intelligence: Development of AI-based threat detection systems to protect information integrity
This investigation evaluates progress in artificial intelligence (AI) threat detection systems for strengthening information-system security, achieved by guaranteeing the integrity, confidentiality, and availability of information. The results further highlight the use of AI to proactively detect and mitigate unseen attacks before they cause any real damage. Using a qualitative methodology as an investigatory framework, this research presents an interpretative, fine-grained analysis of the literature to evaluate how AI has been used in prevention and containment. The study also discusses how different AI methodologies, such as machine learning, deep learning, and neural networks, boost threat detection. A literature review critically examines former studies, highlighting patterns, trends, and new methodologies that underscore the ongoing maturation and intricacy of Internet-related threats and the AI countermeasures to cybercrime. A chief finding concerns general adaptation: when attack patterns shift away from what traditional deterministic rules can detect, such cyberthreats remain hidden from the human eye, and adaptive AI-based detection becomes necessary. The study also explores the implementation of AI alongside other security systems, including encryption and blockchain, to establish multiple defence layers. This integration noticeably enhances the resilience of information systems, making them stronger against APTs and zero-day attacks. Attention is also drawn to the wealth of qualitative data available for exploring more nuanced aspects of information security; this complete view helps bring further clarity to both the issues and the potential of AI-driven security solutions.
These results underscore the importance of AI as a critical component of successful digital information-security strategies and argue for its wider implementation throughout industries. The research concludes by recommending further exploration of AI's role in predictive analytics and automated incident response, areas where its potential remains underutilized.
Nelson Salgado Reyes
Open Access
Article
Conference Proceedings
Can We Trust Them? Examining the Ethical Consistency of Large Language Models to Perturbations
The increasing reliance on Large Language Models (LLMs) raises a crucial question: can these powerful AI systems be trusted to make ethical choices? This study presents an analysis of LLM ethical behavior, examining 25,200 queries across 24 different models, including both proprietary and open-source variants. We evaluate LLM responses to 70 ethical vignettes spanning six domains, employing a novel perturbation methodology to assess the robustness of their ethical decision-making under varying contexts and framing. Our findings reveal that while larger models generally exhibit higher consistency, particularly with Chat-style instructions, significant variations emerge when faced with contextual changes, stakeholder adjustments, and across different ethical domains. To explain these findings, we introduce a novel framework, "survival-relevant pattern recognition", which argues that ethical behavior in both humans and AI arises from recognizing and responding to patterns associated with survival and social cohesion.
Manuel Delaflor Rodríguez, Cecilia Delgado Solorzano, Carlos Toxtli
Open Access
Article
Conference Proceedings
An AI-driven Ukrainian History web platform
The AI-driven Ukrainian History web platform offers an innovative way for users to engage with the nation’s rich history. By integrating artificial intelligence, natural language processing (NLP), and geospatial analysis, it presents historical events, significant locations, and notable figures in an interactive and visually engaging format. The platform systematically gathers historical data using tools like Scrapy for web scraping and Tesseract OCR for digitizing scanned documents. While noisy or degraded documents may affect accuracy, the availability of high-quality sources ensures reliable data extraction. Fine-tuned NLP models, including transformers like BERT and RoBERTa, process the data to identify and categorize key entities such as dates, locations, and names of historical figures. Contextual summarization ensures the extracted information is both accurate and easy to understand. Geospatial data is managed with PostGIS, an extension of PostgreSQL, and visualized using Leaflet.js. An interactive map interface enables users to explore events by location and time period, with filters for categories like political milestones or cultural events. The backend, built on PostgreSQL, ensures scalability and performance, while development in Visual Studio Code streamlined integration across components. This platform not only preserves Ukraine’s cultural heritage but also demonstrates the potential of modern technology to transform historical education, offering an intuitive way to connect with the past and explore its influence on Ukraine’s landscape and culture.
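The map-side filtering by time period and category described above can be sketched as follows; the event records and field names are invented examples, and the real backend issues PostGIS queries rather than iterating in Python:

```python
# Invented sample records; the platform stores events in PostgreSQL/PostGIS.
EVENTS = [
    {"name": "Event A", "year": 1648, "lat": 49.40, "lon": 32.10,
     "category": "political"},
    {"name": "Event B", "year": 1991, "lat": 50.45, "lon": 30.52,
     "category": "political"},
]

def filter_events(year_from, year_to, category=None):
    """Return events inside the requested time window, optionally
    restricted to one category, mirroring the map interface filters."""
    return [e for e in EVENTS
            if year_from <= e["year"] <= year_to
            and (category is None or e["category"] == category)]
```

The Leaflet.js frontend would then render only the returned events as markers, so panning the timeline or switching category redraws the map without reloading all data.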
Valentyna Kolomiets, Pedro Oliveira, Paulo Matos
Open Access
Article
Conference Proceedings
Generating Realistic Traffic Scenarios: A Deep Learning Approach Using Generative Adversarial Networks (GANs)
Traffic simulations are crucial for testing systems and human behaviour in transportation research. This study investigates the potential efficacy of Unsupervised Recycle Generative Adversarial Networks (Recycle-GANs) in generating realistic traffic videos by transforming daytime scenes into nighttime environments and vice versa. By leveraging Unsupervised Recycle-GANs, we bridge the gap in data availability between day and night traffic scenarios, enhancing the robustness and applicability of deep learning algorithms for real-world applications. GPT-4V was provided with two sets of six different frames each, from the daytime and nighttime portions of the generated videos, and queried whether the scenes were artificially created based on lighting, shadow behaviour, perspective, scale, texture, detail, and the presence of edge artefacts. The analysis of GPT-4V output did not reveal evidence of artificial manipulation, which supports the credibility and authenticity of the generated scenes. Furthermore, the generated transition videos were evaluated by 15 participants who rated their realism on a scale of 1 to 10, achieving a mean score of 7.21. Two participants identified the videos as deep-fake generated without pointing out what was fake in the video, although they did mention that the traffic was generated.
Md Shadab Alam, Marieke Martens, Pavlo Bazilinskyy
Open Access
Article
Conference Proceedings
The Role of Artificial Intelligence (AI) & Future Applications in the Implementation of Aviation Fatigue Risk Management System
Fatigue in aviation operations is a critical issue affecting safety and operational performance. Traditional Fatigue Risk Management Systems (FRMS) rely heavily on subjective reporting and retrospective data, limiting their effectiveness in real-time fatigue detection and mitigation. The integration of Artificial Intelligence (AI) offers transformative solutions through predictive analytics, real-time monitoring, and machine learning algorithms, enhancing FRMS capabilities. Integrating AI into FRMS introduces unprecedented capabilities in monitoring, predicting, and mitigating fatigue risks. AI-powered tools leverage real-time data from diverse sources, including biometric sensors, flight schedules, environmental factors, and operational logs, to deliver actionable insights. Machine learning algorithms analyze historical patterns and operational data to identify high-risk scenarios, enabling predictive fatigue modeling. Such tools enhance the ability to forecast fatigue hotspots, allowing for proactive mitigation strategies, such as dynamic crew scheduling and workload redistribution. Computer vision and natural language processing (NLP) technologies also provide innovative methods for monitoring behavioral indicators of fatigue, such as speech patterns, facial expressions, and task performance during pre-flight checks or in-flight operations. AI also contributes to resilience by automating the continuous evaluation of fatigue management policies. Adaptive systems can recommend adjustments to policies and practices based on evolving data trends, ensuring compliance with regulatory standards while optimizing operational efficiency. Furthermore, AI facilitates personalized fatigue management by tailoring interventions to individual crew members' physiological and operational profiles, improving effectiveness and crew well-being.
This paper explores the limitations of current FRMS approaches and discusses AI's role in advancing fatigue risk management using wearable technologies, predictive models, and decision-support systems. It examines ethical considerations, regulatory challenges, and a comparative analysis of FAA, EASA, ICAO, and IATA standards. The findings highlight AI's potential to transition fatigue management from reactive to proactive strategies, fostering a safer and more efficient aviation environment.
Dimitrios Ziakkas, Debra Henneberry, Konstantinos Pechlivanis
Open Access
Article
Conference Proceedings
Diversity of Perception in Human-AI Collaboration
Two key approaches to building AI systems are Model-Centric AI (MC-AI) and Data-Centric AI (DC-AI). When AI systems are deployed in real-world environments, they become part of a socio-technical ecosystem, interacting with humans, processes, and other systems. This interaction often occurs in hybrid teams, where humans and AI collaborate to achieve shared objectives. However, human influences, at any stage, can lead to suboptimal outcomes, such as model drift or reduced performance. Humans introduce variability, as personal experience, biases, and decision-making approaches can significantly impact outcomes; changing one human in the process can alter the results dramatically. This paper reviews the processes involved in building, deploying, monitoring, and maintaining AI systems and discusses the human influences at each step, the potential risks that may arise, and the main skills necessary to avoid humans' negative influences. By incorporating perception diversity and tolerating ambiguity, the computing-with-perception framework enhances human-AI collaboration, enabling systems to manage the complexity and ambiguity of real-world problems.
Mohamed Quafafou
Open Access
Article
Conference Proceedings
Integrating Generative AI in Design Education: A Structured Approach to Client-Centered Interior Design Visualization
Generative AI (GAI) is reshaping the future of work in architecture by introducing innovative ways for humans to interact with technology, transforming the design process. In education, GAI offers students immersive environments for iterative exploration, enabling them to visualize, refine, and present design concepts more effectively. This paper investigates how GAI, through a structured framework, can enhance the learning of design tasks in elaborating interior design proposals, preparing students for the evolving professional landscape. Drawing on the platform Midjourney, students explored concepts, material moodboards, and spatial compositions, simulating professional scenarios. Each student was assigned a real client and tasked with developing tailored design solutions, guided by client and tutor feedback. This approach demonstrates how GAI supports the development of future-oriented skills, directly linking education to the technological shifts in professional practice (Araya, 2019). The study adopts a practice-based methodology, documenting the outcomes of an interior design workshop where students employed GAI tools to develop client-specific proposals. Students engaged in role-playing, meeting their assigned clients face-to-face to gather requirements, acting as junior architects. They analyzed client feedback to inform the design phase, after which they used a structured framework for making more effective use of GAI to iteratively refine their proposals. By generating AI-assisted visualizations of spatial configurations and materials, students developed final design solutions that aligned with client expectations. Data from GAI iterations, client feedback, and tutor evaluations were used to assess how effectively AI tools contributed to producing professional-quality designs (Schwartz et al., 2022).
Two research questions frame this investigation: (1) How does Generative AI enhance students' ability to create client-specific interior design solutions, from concept generation to final visualization, within a structured educational framework? (2) How does the integration of GAI tools impact the teaching of iterative design processes in architecture, particularly in preparing students for the future of work in the profession? The findings reveal that GAI significantly improved students' design outcomes by enabling them to visualize and refine their proposals based on real-world scenarios. GAI facilitated the exploration of current trends and supported the creation of material moodboards and space visualizations. The iterative nature of AI tools allowed students to better grasp the relationships between spatial configurations, design choices, and client needs. Their final proposals, incorporating AI-generated outputs, were praised for their conceptual clarity and technical precision, reflecting how AI-driven processes can transform traditional workflows (Burry, 2016). This study illustrates the transformative potential of GAI in architectural education, particularly in fostering dynamic human-technology interactions. By leveraging AI, students maintained control over outputs while transforming abstract concepts into client-ready designs. Moreover, the iterative feedback loop enabled by GAI promoted a more adaptive and responsive learning process, giving students real-time insights into their design decisions. These insights reflect broader changes in the future of work, where AI-driven tools will become integral to professional practice. Future research could explore expanding GAI's role in more complex design stages, such as schematic design and development, building on the benefits observed in this study.
References: Araya, D. (2019). Augmented Intelligence: Smart Systems and the Future of Work. Springer. Burry, M. (2016). The New Mathematics of Architecture. Thames & Hudson. Schwartz, J., Hatfield, S., & Monahan, K. (2022). Designing Work for a Generative Future: AI's Role in Shaping Creative Professions. Deloitte Insights.
Silvia Albano, Gianmarco Longo
Open Access
Article
Conference Proceedings
Impact of Generative AI on the Acquisition of Competencies in Educational Institutions of the Vienna Chamber of Commerce and Industry: GenAI in Future Education
The aim of this research project is to develop scientifically based recommendations to support the educational institutions of the Vienna Chamber of Commerce and Industry (WKW) in effectively integrating Generative Artificial Intelligence (GenAI) into their programmes. The educational institutions comprise the University of Applied Sciences for Management and Communication (FHWien der WKW) in the tertiary sector, two schools (Tourism College MODUL Vienna and Vienna Business School – VBS) in the field of high school education, and a further education institution for adult education (Institute for Economic Promotion – WIFI Vienna). The project examines the current perception of AI in the labour market and investigates the impact of Generative AI on students' competency acquisition, as well as strategies for its effective utilisation in educational settings. The central research question of this project is as follows: What strategies might be employed to ensure the effective utilisation of Generative AI in the educational institutions of the WKW, with a view to enhancing learners' competency acquisition and integrating it meaningfully into future education? To evaluate the present utilisation of GenAI among educators and learners, the study will employ questionnaires, workshops, and practice-oriented experiments. Based on the findings of this research, an AI Info Hub will be established as a central resource platform. Its purpose is to provide educators and learners with up-to-date information, best practices, workshops, and support for integrating AI into teaching and learning processes. By comprehensively understanding and addressing the challenges and opportunities of AI in education, this project will empower the educational institutions of the WKW to promote the acquisition of competencies that enable effective human-AI collaboration.
Ultimately, this will contribute to improving the quality of education and preparing learners for future work environments in which AI is an integral component.
Patrick Rupprecht, Isabel Rodenas, Tilia Stingl De Vasconcelos Guedes
Open Access
Article
Conference Proceedings
Certificates and the security of digital health information
The digitization of healthcare information has expanded access to medical data while raising concerns about its security, authenticity, and trustworthiness. This paper explores the role of digital certificates in addressing these challenges, focusing on their potential to verify the credibility of health information and protect sensitive data. It begins with a theoretical overview, emphasizing the importance of certificates in ensuring data authenticity and integrity, particularly in compliance with regulations such as the GDPR. The analysis examines current certificate models like HONcode and PIF TICK, highlighting their limitations in public awareness and practical application. Innovative technologies such as blockchain and zero-knowledge proofs are identified as promising tools for enhancing the security and traceability of health information. Blockchain’s immutability and decentralized verification capabilities, combined with patient-controlled data access via smart contracts, underscore its potential in fostering trust and compliance with privacy standards. The paper outlines essential certification requirements, including technical efficiency through machine learning, content accuracy based on scientific validation, and process transparency. Furthermore, user-centric approaches are emphasized to enhance certificate accessibility and public trust. The study also examines parallels in other industries, such as food and finance, which employ rigorous certification systems for safety and reliability. Ultimately, this research advocates for a hybrid certification model combining automated and expert-driven processes. By leveraging modern technologies and interdisciplinary practices, such a model can address the dual goals of ensuring high-quality health information and fostering user trust in the digital healthcare landscape.
Christoph Jungbauer, Christian Luidold
Open Access
Article
Conference Proceedings
Image-based mandrel detection during stent production in an industrial environment
Of the 985,572 deaths in Germany in 2020, 121,725 were due to coronary heart disease (CHD), making CHD the most common single cause of death in Germany. It is caused by a narrowing of the coronary vessels, which leads to an insufficient supply of oxygen and nutrients to the heart muscle, causing a heart attack, heart failure, or cardiac arrhythmia. A possible treatment for CHD is the implantation of a stent, which widens the narrowed vessel and restores the oxygen and nutrient supply. Being a minimally invasive technique, stent implantation was performed 298,557 times in Germany in 2020, accounting for about 88% of all related interventions. According to the German fee-per-case system, the costs for a single stent range from €54.80 to €1,189.69. Combining the number of implantations with the costs per stent, the resulting financial burden on German health services is evident. One reason for these high costs is the absence of an automated inspection and correction system during stent production using a maypole braider. Following this argumentation, an automated system for detecting and correcting geometry errors in stents during their production is desirable. To detect errors in a stent during the production process, it is necessary to measure its geometry. This requires knowledge about its position within the image. Since the stent is braided using a maypole braider, locating the stent is equivalent to locating the mandrel. This paper proposes a concept to measure the mandrel's position during production based on camera images. It differentiates between and handles cylindrical stents as well as curved ones. It also compensates for the movements of both the camera and the mandrel in the x and z planes; movements in the y-plane can be neglected. Additionally, methods to measure a cylindrical mandrel are evaluated, including Canny Edge Detection, the Hough transform, k-means clustering, and a watershed algorithm.
In addition, four convolutional neural networks and two object detection models were tested. The lowest mean squared error was achieved using the YOLOv10 object detector (mean MSE: 9.04, median MSE: 9.57, mean MAE: 11.5, median MAE: 7.65, execution time per image: 846.9 ms). The fastest approach, with an execution time of 53.06 ms per image, is based on the Canny operator to detect lines together with a threshold on the image histogram to find the position of the mandrel's borders (mean MSE: 55.89, median MSE: 21.0, mean MAE: 70.51, median MAE: 16.12). The images used to train, evaluate, and test all methods were recorded using a maypole braider in an industrial environment. Parts of this work have been developed in the project Stents4Tomorrow. Stents4Tomorrow (reference number: 02P18C022) is partly funded by the German Federal Ministry of Education and Research (BMBF) within the research program ProMed.
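The fast histogram-threshold idea mentioned above can be sketched on a column-intensity profile; the profile values and threshold below are invented, and the real pipeline combines this with Canny-based line detection on the camera images:

```python
def mandrel_borders(column_means, threshold):
    """Locate the left and right borders of the mandrel in an image.
    `column_means` holds the mean grey value of each image column; the
    mandrel appears as a dark band, so its borders are the first and
    last columns whose mean intensity falls below the threshold."""
    dark = [i for i, v in enumerate(column_means) if v < threshold]
    if not dark:
        return None  # no mandrel visible in this frame
    return dark[0], dark[-1]
```

Reducing each frame to a 1-D profile scan like this is what makes the threshold-based variant so much faster than the CNN and object-detection approaches, at the cost of a higher positioning error.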
Yuna Haas, Eric Sax
Open Access
Article
Conference Proceedings
Design of a video game adapted to the study of motivation in young people with emotional disorders
For several years now, the impact that mental illnesses such as depression, anxiety, or eating disorders have on society has become more evident, as has the way they are affecting an increasing number of young people. Measuring variables related to motivation, in order to perceive mood disorders, is important for monitoring the disease. There are two types of motivation: on the one hand, intrinsic motivation, which arises from within the individual, awakening interest in carrying out a task without expecting external rewards; on the other hand, extrinsic motivation, which is stimulated by the search for results or external rewards. Regarding the latter, video games and the fields of psychology and psychiatry have been strengthening their ties in recent years, proving the usefulness of this digital product in the study of patients with some type of emotional problem. This work proposes the design specifications of a video game that can measure extrinsic motivation as a psychological variable that helps determine possible behavioral problems in the person and allows monitoring during treatment. A simple and accessible design is proposed to facilitate its use and make it user-friendly, through an implementation for mobile platforms on the Android operating system. The specifications of the video game are conditioned by the complexity of establishing a balanced work-reward system, equipped with simple game mechanics that do not require specific skills on the part of the user. The result is a feasible design that could allow psychologists and psychiatrists to follow their youngest patients, and an attractive tool to promote the collection of information.
Miguel De Andrés Herrero, Victoria Lopez, Matilde Santos, Diego Urgelés, Manuel Faraco Favieres
Open Access
Article
Conference Proceedings
AI-Generated Clinical Case Studies in Physiotherapy: Enhancing Education Through Integrated Artificial Intelligence
Building on previous research, this study advances the use of AI-generated clinical case studies, specifically targeting various domains in physiotherapy. This work involved the creation of ten detailed clinical cases using a large language model (LLM), OpenAI's ChatGPT (Generative Pre-trained Transformer). Each case was carefully designed to simulate real-world scenarios that physiotherapy students might encounter in their professional practice, covering diverse areas such as orthopedics, neurology, cardiopulmonary care, and geriatrics. To ensure the generated cases adhered to high standards of educational quality, the prompts provided to ChatGPT were meticulously reformulated following established guidelines from the literature. Moreover, a classical physiotherapy textbook was employed as a reference for formatting and structuring the clinical reports. Preliminary feedback from physiotherapy educators and students suggests that the AI-generated content effectively mimics human-authored clinical cases, providing a valuable tool for enhancing clinical reasoning skills and bridging the gap between theoretical knowledge and practical application. Future research will focus on refining the AI prompts further and expanding the range of clinical scenarios to cover a broader spectrum of physiotherapy practice.
Manuela Couto De Azevedo, Mateus Toledo Gomes, Cassiano Portela Da Fonseca, Letícia Lima Pires, Christiano Bittencourt Machado
Open Access
Article
Conference Proceedings
Mixed Reality as a tool for enhancing precision in surgery planning
Mixed reality (MR) is an emerging technology that combines features of augmented reality (AR) and virtual reality (VR) by overlaying virtual elements onto a natural environment. This fusion of the real with the digital allows users to interact naturally and intuitively with the various aspects, making MR a valuable tool for its application in different fields, including the clinical field. This work aims to present a working methodology for the application of Mixed Reality in different surgical specialities, showcasing scenarios generated for its use in the surgical fields of trauma and vascular surgery. By superimposing images and using 3D anatomical models over the surgeon's field of vision, the aim is to support the surgeon's movement guidance in complex procedures or areas that are difficult to visualise. To achieve this, the development process of the MR scenarios is detailed: firstly, the work of the medical image and extraction of the anatomical models using Materialise’s Mimics software, followed by the importation into the Unity engine for the design and positioning of virtual elements to be displayed, and finally the visualisation and design of the interactions with the digital environment through the use of different devices (tablets, smartphones, headsets). In addition to combining 3D anatomical models with information from the DICOM file of the medical image, the working methodology presented also details the work carried out for the positioning of guiding elements, such as vectors, angles, trajectories or other geometric elements that aid in guiding the surgeon’s movements, using devices that allow professionals to keep their hands free. 
All of this aims to show a possible use of Mixed Reality by offering greater immersion through anatomical models that faithfully represent the patient's anatomy and the ability to interact with them in real-time, making it technological support for the clinician's training for diagnosis and surgical planning, improving anatomical understanding in complex cases. The potential and application of MR in various surgical fields, especially in surgical planning, could significantly transform medical practice by allowing greater personalization of interventions, optimising precision for better clinical outcomes, and saving time in the operating room by increasing the efficiency and safety of the surgical procedure.
Alejandra Gomez De Cadiz, Ainhoa Rodríguez- De Luis, Iván Martín González, María Carmen Juan Lizandra, Cristina Herrera Ligero
Open Access
Article
Conference Proceedings
Specific conditions of home use medical devices: A study on CPAP devices
Recent developments in medical and healthcare technologies have resulted in longer life expectancy, along with an increase in chronic conditions and related medical costs. This trend is being mitigated through the integration of empowering technologies into everyday life, which improve tracking and effectiveness in preventive healthcare, thereby enhancing quality of life and lowering expenses. Digital technologies, especially smart wearables, have become widely adopted, addressing a variety of user needs beyond just health and wellness. Their multifunctionality facilitates their incorporation into daily routines, allowing users to monitor exercises, steps, calories burned, sleep patterns, heart rate, blood pressure, diet, and hydration, all key components for preventive health. Conversely, while medical devices are becoming more compact and user-friendly for non-professionals, their development is progressing more slowly and conservatively than digital products. The differences between these markets stem from variations in regulations, research and development capabilities, financial limitations, and legal obligations. Nevertheless, medical technologies in healthcare are increasingly integrating with consumer technologies, highlighting the need for product designers to be aware of the specific requirements for medical devices. This paper reports on findings from a field study with 30 users of Continuous Positive Airway Pressure (CPAP) devices for Obstructive Sleep Apnea (OSA), selected as a successful home-use medical device for a common chronic condition. The findings are categorized by factors affecting product choice, ease of use, and user perceptions of CPAP therapy.
Mehmet Erçin Okursoy, Naz Börekçi
Open Access
Article
Conference Proceedings
Connecting Image and Reality: The Role of 3D Printing in Surgical Planning
Training the healthcare service members of tomorrow or assisting in the surgical planning of today's interventions is challenging, as the current CT/MRI images require an experienced and trained eye to interpret the intricate anatomy of the human body correctly. Although the CT/MRI scans are highly detailed and widely implemented, they cannot address the level of understanding a trainee or expert could gain through the haptic perception or tactile learning experience offered by 3D printing. 3D printing enables hands-on understanding through highly detailed, patient-specific anatomical models from medical imaging, providing surgeons with an interactive way to visualise and practice complex procedures before entering the operating room. In surgical education, 3D-printed models, especially those simulating the texture of actual tissues or bones, provide essential tactile feedback that contributes to realistic training scenarios. This enhances surgical precision and reduces the likelihood of complications during surgery. Moreover, these models allow trainees to practice on accurate replicas of human organs, improving their skills in a risk-free environment. Therefore, this paper presents some case studies focusing on 3D printing in surgical planning that can effectively highlight the technology's current advantages and limitations. The models, fabricated with flexible and radiotransparent materials, allow surgeons to simulate surgical scenarios, improving preoperative planning, instrument handling, and decision-making. Subjective validation by specialists demonstrated that these models accurately replicate the physical properties of the target anatomy, aiding in better visualisation and procedural practice. However, limitations were observed in current methodologies, such as challenges related to material elasticity, the durability of 3D-printed models, and difficulties in navigating tortuous anatomical paths during simulations.
Further, there is room for improvement in the accuracy of specific anatomical features and the interaction with surgical instruments, where minor irregularities hinder smooth operation. According to the findings, future work should focus on refining the materials used in 3D printing to enhance the robustness and realism of the models, particularly in complex anatomical structures. Additionally, incorporating real-time imaging data with 3D printing could further improve the adaptability of these models for preoperative simulations. Expanding these technologies beyond their current use in vascular surgery could revolutionise other surgical fields, offering customised, patient-specific planning tools across various medical disciplines.
Alejandra Gomez De Cadiz, Adrian Morales-casas, Claudia Marissa Aguirre Ramón, Ignacio Espíritu-garcía-molina, Cristina Herrera Ligero
Open Access
Article
Conference Proceedings
Exploring the Nexus Between Physical and Mental Health: Assessing Stress Through Heart Rate Variability
Physiological signals such as electrocardiography (ECG) have traditionally been associated with assessments of physical health. Mental health, meanwhile, often relies on subjective measures like self-report questionnaires and clinical interviews. Yet the autonomic nervous system and the central nervous system are intrinsically linked to mental states like stress. While previous studies have associated certain parameters with stress, variability in findings and protocols, as well as limited exploration of some parameters, especially non-linear measures, highlights the need for further research. This paper examines how parameters extracted from ECG can be used to study acute stress levels. By analysing ECG data, we seek to identify patterns and correlations that reflect stress responses in individuals, potentially serving as reliable, objective markers. We conducted experiments exposing participants to controlled stress stimuli. Each session included a baseline measurement at rest, exposure to a stressor (the cold-pressor test), and a recovery phase. Continuous ECG recordings were obtained, and a comprehensive range of Heart Rate Variability (HRV) parameters, encompassing time-domain, frequency-domain, geometrical and non-linear measures, was extracted to assess autonomic balance. Preliminary results demonstrate that certain HRV parameters change characteristically during acute stress exposure, indicating increased sympathetic activity (e.g., reduced mean and median NN intervals reflecting a shift toward higher heart rate). These physiological changes tended to normalise during recovery, underscoring the dynamic nature of the acute stress response. However, elevated parasympathetic-like measures (e.g., elevated SDNN, RMSSD and pNN50) during stress suggest that conscious or subconscious respiratory modulation can influence HRV indices.
Moreover, some parameters revealed age-related differences that highlight how autonomic adaptability may diminish as individuals advance in age. These findings suggest that ECG-derived HRV parameters can serve as reliable, objective markers of acute stress. Understanding the physiological foundations of stress and the factors that modulate it, such as breathing patterns and age, may inform the development of non-invasive monitoring tools and interventions. This, in turn, could lead to more comprehensive evaluations of stress-related conditions like anxiety or depression, and support personalised strategies to enhance mental health and wellbeing.
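For illustration, the time-domain HRV indices named above (mean NN, SDNN, RMSSD, pNN50) can be computed from a series of NN intervals as in this minimal sketch. The interval values are hypothetical, and this is not the authors' analysis pipeline:

```python
import math
import statistics

def hrv_time_domain(nn_ms):
    """Compute common time-domain HRV indices from NN intervals (ms)."""
    diffs = [b - a for a, b in zip(nn_ms, nn_ms[1:])]  # successive differences
    return {
        "mean_nn": statistics.fmean(nn_ms),            # mean NN interval
        "sdnn": statistics.stdev(nn_ms),               # sample SD of NN intervals
        "rmssd": math.sqrt(statistics.fmean(d * d for d in diffs)),
        "pnn50": 100.0 * sum(abs(d) > 50 for d in diffs) / len(diffs),
    }

# Hypothetical NN-interval series (ms) around a resting heart rate of ~75 bpm.
nn = [800, 810, 790, 820, 805, 795, 860, 780]
metrics = hrv_time_domain(nn)
```

Lower mean NN corresponds to a higher heart rate (sympathetic shift), while RMSSD and pNN50 are conventionally read as short-term, parasympathetically mediated variability, which is why their elevation under paced breathing complicates interpretation.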
Ana De La Torre - García, Úrsula Martínez - Iranzo, Gema Prats Boluda, Miguel Ángel Serrano Rosa, José Luis Martínez De Juan, Cristina Herrera Ligero
Open Access
Article
Conference Proceedings
Bidirectional Long Short-Term Memory (Bi-LSTM) with Convolutional Neural Networks (CNN) Based Obstructive Sleep Apnea Detection Using ECG Signals
Recent advances in artificial intelligence (AI) have significantly impacted various fields, including finance, manufacturing, and bio-signal analysis. Obstructive sleep apnea (OSA) is a common disorder characterized by recurrent episodes of partial or complete airway obstruction during sleep. Traditionally, it is diagnosed using polysomnography (PSG), which involves overnight monitoring of various physiological signals, including ECG. This process can be both time-consuming and uncomfortable for patients. Therefore, efficient and accurate OSA detection through bio-signal analysis is essential. ECG signals represent time-series data that exhibit high temporal dependency and non-stationary characteristics, meaning their features change dynamically over time. To address this complexity, we propose a hybrid model that integrates Bidirectional Long Short-Term Memory (Bi-LSTM) with Convolutional Neural Networks (CNN) to detect OSA events from ECG signals. This model processes key features extracted from CNN layers, capturing both past and future contexts simultaneously in the Bi-LSTM sub-module. This approach enhances the detection of subtle differences in temporal dependencies. For our study, we sampled 72 ECG signals, considering gender and severity levels, from the publicly available PSG-Audio dataset, and segmented them into 30-second intervals. Following a filtering process, we applied dimensionality reduction using the EMD algorithm based on prior results. Our experiments demonstrated that the proposed model outperformed the reference model from a previous study, achieving an accuracy of 88.68%, sensitivity of 86.94%, specificity of 90.38%, and an F1 score of 0.895. These results highlight the effectiveness of the proposed model in detecting OSA, which could enhance diagnostic accuracy through advanced bio-signal analysis.
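The 30-second segmentation step described above can be sketched as follows. The 100 Hz sampling rate is a hypothetical value chosen for illustration, not necessarily that of the PSG-Audio recordings:

```python
def segment_signal(samples, fs_hz, window_s=30):
    """Split a 1-D signal into non-overlapping fixed-length windows.

    Trailing samples that do not fill a complete window are dropped,
    mirroring the common practice of training on equal-length segments.
    """
    win = int(fs_hz * window_s)
    return [samples[i:i + win] for i in range(0, len(samples) - win + 1, win)]

# Hypothetical: a 100 Hz ECG with 95 s of data yields three full 30 s segments.
fs = 100
ecg = [0.0] * (95 * fs)
segments = segment_signal(ecg, fs)
```

Equal-length segments are what allow a fixed-input CNN front end to extract features that the Bi-LSTM sub-module then reads in both temporal directions.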
Yinxian He, Amy M Kwon, Kyungtae Kang
Open Access
Article
Conference Proceedings
Advances in Pulse Rate Variability (PRV) Monitoring with rPPG: Insights from Unsupervised Methods
Remote photoplethysmography (rPPG) emerges as a non-invasive alternative for pulse and pulse rate variability (PRV) measurement, eliminating the need for direct skin contact. This approach is particularly suitable for applications where wearable sensors are impractical, such as the automotive sector, where accurate and robust PRV monitoring is essential to enhance driver safety by providing real-time insights. This study evaluates the accuracy and robustness of rPPG signal extraction using the Freyja/IBV-Dataset, which comprises 73 participants with diverse intrinsic factors, such as age, body mass index (BMI), and skin phototypes, as well as extrinsic conditions, including varying lighting and distances. Seven rPPG algorithms (GREEN, POS, CHROM, ICA, FastICA, PVB, and LGI), selected for their established efficacy in handling environmental variations, were compared against electrocardiogram (ECG) as the reference standard. The findings reveal that the mean normal-to-normal interval (meanNNI) demonstrates the greatest robustness when estimated using ICA and FastICA, which achieved consistently low mean absolute errors (MAE) even under challenging conditions such as reduced lighting and increased distance. However, the estimation of the standard deviation of normal-to-normal intervals (SDNN), a parameter sensitive to noise and environmental conditions, showed higher errors. These discrepancies are attributed to intrinsic differences between mechanical (rPPG) and electrical (ECG) signals, disparities in sampling frequencies between devices, and environmental influences. This study highlights the need to optimize rPPG signal extraction and processing techniques to improve the accuracy and robustness of PRV parameter estimation. 
Future research should focus on increasing the image sampling rate, exploring PPG measurements closer to the face, and employing advanced artificial intelligence (AI) methods to adapt algorithms for challenging conditions, such as diverse skin phototypes and complex environmental settings.
Marc Escrig Villalonga, Úrsula Martínez - Iranzo, Cristina Herrera Ligero, Alberto Albiol Colomer, Ana De La Torre - García
Open Access
Article
Conference Proceedings
Comparison of Two Smartwatch-Based Approaches for Real-Time Activity Classification in the Care Context
This work presents a comparative analysis of two approaches to the classification of human activities. These approaches offer the potential for the automation of caregivers' documentation of activities performed by patients, which could facilitate improvements in the treatment of diseases. Both approaches are based on integrating smartwatch technology with a neural network, enabling the real-time classification of activities. The integration of sensor technology into a patient's daily life via a smartwatch can facilitate the treatment of their disease and provide information about disease progression and disease-related changes. The smartwatch offers the ability to sample accelerometer, gyroscope, gravity, and position data at a frequency of 20 hertz (Hz), which is then transmitted to a recurrent neural network called Long Short-Term Memory (LSTM) for real-time classification. The implemented real-time classification provides immediate and precise indications regarding the temporal occurrence and probability of performing one of the defined activities. The primary distinction between the two classification methodologies pertains to the implementation of the LSTM network. One approach involves the operation of the LSTM neural network on a server, while the other employs direct operation on the smartwatch. This distinction yields notable contrasts in performance and functionality. The findings indicate that the server-based model exhibits superior classification accuracy and more comprehensive functionalities, whereas the model implemented on the smartwatch demonstrates enhanced flexibility. Based on data obtained from smartwatch sensors, activities that are very similar can be classified flexibly, irrespective of location and in real-time. The insights gained about patients' motor skills provide the potential for nursing staff to be supported in the care of patients with neurological diseases.
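The real-time windowing of 20 Hz smartwatch samples that feeds such an LSTM can be sketched with a simple ring buffer. The 2-second window (40 samples) and 0.5-second hop (10 samples) are hypothetical configuration choices, not the parameters used in the study:

```python
from collections import deque

class StreamingWindow:
    """Buffer 20 Hz sensor samples and emit fixed-length overlapping windows."""

    def __init__(self, window=40, hop=10):
        self.window, self.hop = window, hop
        self.buf = deque(maxlen=window)  # oldest samples fall off automatically
        self._since_emit = 0

    def push(self, sample):
        """Add one sensor sample; return a full window when one is due."""
        self.buf.append(sample)
        self._since_emit += 1
        if len(self.buf) == self.window and self._since_emit >= self.hop:
            self._since_emit = 0
            return list(self.buf)  # snapshot handed to the classifier
        return None

# Feed 100 hypothetical (accel_x, accel_y, accel_z) samples, collect windows.
sw = StreamingWindow()
windows = [w for w in (sw.push((0.0, 0.0, 9.8)) for _ in range(100)) if w]
```

A hop shorter than the window gives overlapping windows, so the classifier can localize activity transitions more finely than the window length; this buffering runs identically whether the LSTM itself sits on the server or on the watch.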
Sergio Staab, Nadia Günter, Ludger Martin, Johannes Luderschmidt
Open Access
Article
Conference Proceedings
Algorithmic Journalism and Ideological Polarization: An Experimental Work Around ChatGPT and the Production of Politically Oriented Information
Artificial intelligence (AI) is one of the emerging technologies that is developing with ever greater intensity and in an ever-increasing number of domains, often reshaping those domains in the process. In journalism, generative AI has become a tool used to write texts and articles, with potential implications for ethics and transparency (Diakopoulos, Koliska, 2017), together with a possible reconfiguration of the perimeter and foundations of news-making, oscillating between different options and positions (Schapals, Porlezza, 2020). The scientific literature on the subject is expanding and the discussion among decision makers is becoming more intense (most recently with the AI Act of the European Parliament). Indeed, journalism has been one of the professions most characterized by its relationship with technology, and most significantly modified by it in its production processes and business models (Pavlik, 2000). Natural Language Generation (NLG) software based on AI algorithms has contributed particularly significantly to spreading the perception of a "paradigm shift" among insiders and information operators. In fact, various theses and arguments have been developed around such software, including political-ideological and ethical-philosophical evaluations. Therefore, in this work we propose an experimental study, based on a mixed-method methodology, which starts from the following research question: is there a prevalent political orientation of AI-based generative software? Or, better yet, can we verify, on certain topics, a propensity of the machine to generate "polarized" articles classifiable along the right-left axis in relation to the subject of the discussion? And, therefore, can "automated" journalism also lead to the production of articles with a predefined orientation and thesis?
To verify this research hypothesis, we intend to have an AI-based NLG platform (e.g., ChatGPT) generate articles on three selected topics with reference to the most recent Italian and international political debate, also investigating the effect of the cheat sheet indications on the polarization of the articles: a. immigration management policies; b. minimum wage; c. adoption of children by homoparental couples. These are topics usually treated in a highly polarized way in the contemporary transitional post-public sphere (Schlesinger, 2020), allowing us to empirically test whether the automation in the production of articles is free from political evaluations or whether it turns out to be influenced by a dominant (or mostly distributed) political orientation within the vast dataset of document sources that forms the basis of the AI system's training.
Bibliography
Diakopoulos, Nicholas, and Michael Koliska. 2017. Algorithmic transparency in the news media. Digital Journalism 5: 809–28.
Pavlik, John. 2000. The impact of technology on journalism. Journalism Studies 1: 229–37.
Schapals, Aljosha Karim, and Colin Porlezza. 2020. Assistance or resistance? Evaluating the intersection of automated journalism and journalistic role conceptions. Media and Communication 8: 16–26.
Schlesinger, Philip. 2020. After the post-public sphere? Media, Culture and Society 42(7-8): 1545–1563.
Claudio Loconsole, Massimiliano Panarari
Open Access
Article
Conference Proceedings
Design of a programming workshop to update gender bias in engineering among adults
One of the SDGs where progress has been notably slow in Japan is "Achieving Gender Equality" [1]. In this context, a 2021 survey by the OECD [2] pointed out that the percentage of female tertiary entrants into the fields of engineering, manufacturing, and construction in Japan was 16%, the lowest among OECD countries. A survey conducted in Japan also revealed that mechanical engineering and computer science are often perceived as unsuitable occupations for women [3]. One proposed initiative to tackle this issue is encouraging female teenage students to participate in programming workshops, and programming materials designed specifically for girls have been proposed [4]. However, in addition to these initiatives, it is also necessary to engage the older generations, who can significantly impact girls' career choices. The purpose of this study was to design a new programming workshop attracting female adults who are unfamiliar with programming and to evaluate its effectiveness in reducing the stereotype. The workshop was designed around creating sensor-activated message cards that emit music using a programming kit. Women tend not to become interested in science fields due to the difficulty of imagining practical applications [5]. Providing a new point of view by connecting programming with subjects familiar from daily life, especially for women, could therefore be a solution. This study focused on message cards, which are often used as presents. Through the workshop, participants can experience how programming enables them to add new and unique features to ordinary message cards. Additionally, while message cards that play sound when opened have become popular in shops recently, this workshop offers the experience of creating such innovative items by themselves. In addition to gaining such new knowledge, the workshop includes the process of decorating cards, aiming to enhance the engagement of female adults.
Through this workshop, it was hypothesized that participants would come to feel that programming is more accessible and achievable for them. In the preliminary design study, several female adults participated, and their perceptions of programming were evaluated through quantitative surveys and open-ended descriptions. Participants were also asked about the beginner-friendliness of the workshop design. This paper reports their impressions and suggests approaches to reduce the gender gap.
[1] Sachs, J. D., et al. (2024) The SDGs and the UN Summit of the Future. Sustainable Development Report 2024. Dublin: Dublin University Press.
[2] Organisation for Economic Co-operation and Development (2021) "Japan", in Education at a Glance 2021: OECD Indicators. OECD Publishing.
[3] Ikkatai, Y., et al. (2020) Gender-biased public perception of STEM fields, focusing on the influence of egalitarian attitudes toward gender roles. Journal of Science Communication, 19(1).
[4] Basiglio, S., et al. (2024) The Impact of the 'Coding Girls' Program on High School Students' Skills, Awareness and Aspirations. CESifo Economic Studies.
[5] Smail, B. (1984) Girl-friendly Science: Avoiding Sex Bias in the Curriculum. Longman.
Reika Abe, Kimi Ueda, Hirotake Ishii, Hiroshi Shimoda
Open Access
Article
Conference Proceedings
Hospital Kitchen Ergonomics: Analysis of Manual Operations in a Hospital Kitchen Using Jack Software
Kitchens are considered risky workplaces. Musculoskeletal disorders (MSDs) are one of the leading causes of occupational illnesses that occur due to performing specific forceful kitchen tasks. The study aims to improve kitchen ergonomics by analyzing and redesigning manual operations in a local hospital kitchen in Kuwait using JACK software. A questionnaire was distributed to identify the workers' complaints. Tasks causing pain in the affected areas were investigated. After developing the digital human model, different performance metrics from the JACK software analysis, such as Rapid Upper Limb Assessment (RULA), Ovako Working Posture Analysis (OWAS), and Lower Back Analysis (LBA), were used to analyze the tasks studied. The findings revealed high initial ergonomic risks, with tasks such as vegetable washing consistently scoring 6–7 on the RULA scale, indicating urgent intervention needs. Post-intervention, risk levels were significantly reduced, with RULA scores dropping to 3–4, particularly for the vegetable washing task, which benefitted from tailored ergonomic modifications like food-washing racks and leg supports. Lower back forces were also notably reduced, especially for lighter workers, highlighting the differential impact of task redesign on anthropometric variations. OWAS scores remained stable, reflecting moderate postural risks throughout. This study underscores the effectiveness of tailored ergonomic interventions in reducing MSD risks and improving workplace safety. The proposed methodology, integrating advanced digital modeling and performance metrics, offers a systematic approach for addressing ergonomic challenges in hospital kitchens and other industrial settings.
Lawrence Al-fandi, Kawther Jamal, Fatma Jamal, Ruqaya Safar, Ghadeer Al-neama, Fatma Al-sarraf
Open Access
Article
Conference Proceedings
Traditional vs. Personalised Teaching: An experimental study on AI's role in education
One of the major challenges in education is ensuring student success for most learners through quality teaching. However, creating an inclusive education system that addresses student heterogeneity is difficult when applying the same curricular standard. In most education systems, standardized teaching is the dominant pedagogical approach, characterized by minimal differentiation and a repetitive program applied over years (Perrenoud, 1978). This homogeneity, although practical, often neglects individual differences, limiting each student's learning potential. "All students learn in different ways and more effectively when learning circumstances align with their preferred approach" (Hockett, 2018). Luckin and Holmes (2016) argue that individual human tutoring can be the most effective approach to teaching and learning. Unfortunately, it is unsustainable: it is not possible to provide one teacher per student. Therefore, how can teachers provide personalized teaching? Could AI be a tool for personalization? And could it also contribute to a positive shift in the role of the teacher, positioning them as a cornerstone of teaching? Chen, Xie et al. (2020) observe that studies on Artificial Intelligence in Education (AIEd) have increased significantly, especially regarding content personalization. Maghsudi et al. (2021) state that the goal of personalized teaching is to achieve effective knowledge acquisition aligned with students' strengths and to overcome their weaknesses to reach the desired objective. Through the inclusion of AI in an educational platform, we can accurately acquire the student's characteristics. This is achieved by observing the student's history and past experiences, identifying patterns and similarities, and analyzing large volumes of data. The recommendation of appropriate content, a long-term curriculum, and the creation of accurate performance assessments can become a reality in education systems.
This improves learning but also predicts areas where the student may struggle, providing personalized and real-time support. Thus, AIEd could fill current gaps in the education system, enabling teachers to create personalized learning for each student's profile. As Luckin and Holmes (2016) suggest, AIEd could take over bureaucratic tasks currently assigned to teachers, allowing them more time for creative and inherently human activities that are essential to elevating the quality of the learning process. This study aims to validate the effectiveness of these possibilities through a User Research method. To this end, and with a sample of elementary school students, a comparative performance study was conducted using traditional teaching methods and an AI-generated personalized teaching method. In the traditional teaching method, the same exercise was presented to all students, while in the AI-generated teaching method, a personalized exercise was presented according to each student's needs, characteristics, and learning level, with the same exercise being presented in completely different ways to each student. The results were analyzed qualitatively to assess the effectiveness of the AI-generated personalized teaching method. The study concluded that AIEd better aligns with students' educational needs, improving their development. Although the tests reveal promising developments for the future, technological and ethical barriers remain that must be addressed to ensure this approach is sustainable and inclusive across different educational contexts.
References:
Perrenoud, P. (1978). Das diferenças culturais às desigualdades escolares: a avaliação e a norma num ensino indiferenciado. In Allal, L., Cardinet, J., and Perrenoud, P. (1986). A avaliação formativa num ensino diferenciado. Coimbra: Livraria Almedina, pp. 27-73.
Hockett, J. A. (2018). Differentiation Strategies and Examples: Grades 6-12. Tennessee Department of Education. Alexandria, VA: ASCD.
Luckin, R., Holmes, W., et al. (2016). Intelligence Unleashed: An Argument for AI in Education. Pearson Education.
Maghsudi, S., Lan, A., Xu, J., and van der Schaar, M. (2021). Personalized education in the artificial intelligence era: what to expect next. IEEE Signal Processing Magazine, 38(3), 37-50.
Chen, X., Xie, H., Zou, D., and Hwang, G. J. (2020). Application and theory gaps during the rise of artificial intelligence in education. Computers and Education: Artificial Intelligence.
Ana Marques, Maria Inês Pires, Jo Dias
Open Access
Article
Conference Proceedings