Artificial Intelligence and Social Computing

Editors: Tareq Z. Ahram, Jay Kalra, Waldemar Karwowski
Topics: Artificial Intelligence & Computing
Publication Date: 2025
ISBN: 978-1-964867-39-7
DOI: 10.54941/ahfe1005974
Articles
Data-Driven Insights into Diabetes-Related Hospital Readmissions in the United States: Trends and Predictors
Hospital readmission is a key metric for evaluating healthcare quality, the efficiency of care coordination, discharge planning, and follow-up care. Readmissions, defined as a patient's re-hospitalization within a specified period such as 30 days, are frequently associated with incomplete treatments, medication errors, or inadequate follow-up care. Diabetes-related hospitalizations, which account for a significant percentage of these readmissions in the United States, are a critical and growing concern for healthcare authorities. From 2016 to 2019, diabetes-related 30-day readmission rates consistently surpassed all-cause readmissions (readmissions due to any medical condition), averaging approximately 19.5% compared to 13.9%. Diabetes-related readmissions incurred substantial financial and emotional costs, with aggregate re-hospitalization costs rising from $11.23 billion in 2016 to $14.03 billion in 2019. These burdens on patients and healthcare systems highlight the importance of targeted interventions to mitigate the risks associated with readmissions. With the growing availability of large-scale healthcare data repositories and computing resources, it is now possible to address critical challenges in hospital readmissions using predictive analytics. This study utilizes the Healthcare Cost and Utilization Project (HCUP) Nationwide Readmissions Database (NRD) from 2016 to 2019 to develop machine learning (ML) models for predicting 30-day readmissions for diabetic patients. Using diverse attributes such as patient demographics, hospital characteristics, payer type, and discharge disposition, this research explores how a predictive modeling approach built on healthcare data repositories can generate actionable insights to improve diabetes-related patient outcomes. Independent predictors identified include payer type, disposition type, and median household income, each demonstrating significant predictive value across ML algorithms. Ensemble approaches such as Boosted Trees and Bootstrap Forest outperformed traditional methods, achieving Area Under the Receiver Operating Characteristic curve (AUROC) scores of 0.7417 and 0.6978, respectively, while maintaining low misclassification rates (31.4% for Boosted Trees). These results highlight the potential of ML models trained on large-scale datasets to optimize care coordination. The findings emphasize the importance of socioeconomic and institutional factors in predicting diabetes-related readmissions and the role of data-driven methodologies in advancing healthcare. This study contributes to the broader application of predictive analytics in healthcare, offering scalable solutions to lower readmissions using healthcare data repositories. Future directions include refining the ML models and comparing them with existing studies to improve predictive accuracy and healthcare delivery for diabetic patients.
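As an illustration of the kind of modeling pipeline this abstract describes, the sketch below trains a boosted-tree classifier and scores AUROC with scikit-learn. It is not the authors' code: the file name and feature columns (stand-ins for the payer-type, disposition, and income predictors named above) are hypothetical.

```python
# Illustrative sketch, not the authors' pipeline: boosted trees on
# NRD-style discharge records, scored by AUROC and misclassification rate.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("nrd_diabetes_2016_2019.csv")  # hypothetical extract
features = ["payer_type", "disposition_type", "median_hh_income_quartile",
            "age_group", "hospital_bed_size"]    # hypothetical column names
X = pd.get_dummies(df[features], columns=features)  # one-hot encode categoricals
y = df["readmitted_30d"]                            # 1 = readmitted within 30 days

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

model = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05)
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print("AUROC:", roc_auc_score(y_test, probs))
print("Misclassification rate:", (model.predict(X_test) != y_test).mean())
```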
Ruchi Kukde, Jaymeen Shah, Aindrila Chakraborty
Open Access
Article
Conference Proceedings
A Sliding-Window Batched Framework: Optimizing Retrieval-Augmented Generation (RAG) for Trustworthy AI under the EU AI Act
This study introduces Sliding-Window Batched RAG (SWB-RAG), a novel framework that optimizes both efficiency and contextual accuracy in retrieval-augmented text generation for lengthy and complex documents, in support of Trustworthy AI. Building upon foundational RAG research (Lewis et al., 2020) and sliding-window techniques (Beltagy et al., 2020), we conducted a two-phase comparative evaluation. In Phase One, when processing a 144-page legal document, SWB-RAG achieved statistical equivalence to Classic Contextual RAG (CC-RAG) across all RAGAS quality metrics while reducing runtime by 92.7% and costs by 97.9%. In Phase Two, across 56 diverse documents totaling 5,965 pages, SWB-RAG significantly outperformed Traditional RAG (T-RAG) in context recall (p < 0.001) and context precision (p = 0.008). The framework's innovation lies in its three-component architecture: global document summarization to capture overarching themes, batch processing to optimize computational efficiency, and sliding-window context enrichment to preserve local contextual richness. Our results, including a Human-in-the-Loop expert evaluation, position SWB-RAG as a scalable, cost-effective solution, especially for legal, technical, and scientific document processing, effectively addressing the fundamental efficiency-quality tradeoff that has limited the practical application of RAG systems for complex documents in resource-constrained environments.
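The sliding-window batching idea can be sketched in a few lines. The following is our reconstruction under stated assumptions, not the authors' implementation: overlapping page windows preserve local context, and windows are grouped into batches so one LLM call serves many chunks; `llm.generate` is a hypothetical client call.

```python
# Minimal sketch of sliding-window batching (our reconstruction):
# adjacent windows overlap to preserve local context, and windows are
# batched so each LLM request covers several chunks at once.
def sliding_window_batches(pages, window=4, overlap=1, batch_size=8):
    """Yield batches of page windows; adjacent windows share `overlap` pages."""
    step = window - overlap
    windows = [pages[i:i + window]
               for i in range(0, max(len(pages) - overlap, 1), step)]
    for i in range(0, len(windows), batch_size):
        yield windows[i:i + batch_size]

pages = [f"page {n} text..." for n in range(1, 145)]  # the 144-page Phase One document
for batch in sliding_window_batches(pages):
    # one batched request per group of windows amortizes per-call overhead,
    # which is where the reported runtime and cost savings would come from
    prompt = "\n\n---\n\n".join(" ".join(w) for w in batch)
    # answer = llm.generate(global_summary + prompt)  # hypothetical client call
```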
Daniel Danter, Heidrun Mühle
Open Access
Article
Conference Proceedings
A Method of Structured Standard Terminology Based on Decoupling Approach
In the context of increasingly frequent interdisciplinary collaboration and global technological exchange, constructing a terminology database is crucial for ensuring consistency in terminology and promoting effective communication. However, a large number of existing standard terminologies are stored in unstructured text files, lacking systematic organization, which hinders the efficient construction and maintenance of terminology databases. There is therefore an urgent need for tools capable of accurately parsing and structuring standard terminology files. Current research primarily adopts rule-based matching and machine learning methods for processing these files. However, these approaches suffer from format sensitivity and high coupling. Inconsistent file formats, coupled with the difficulty of writing style rules that comprehensively cover all scenarios, lead to poor robustness in parsing tools. Moreover, rule-based tools rely heavily on if-else logic, increasing the coupling between rules and making it challenging to add new rules without causing conflicts, thus complicating maintenance and scalability. To address these issues, we propose a parsing tool tailored for standard terminology files that supports the structuring of "terms and definitions" sections from multiple file formats. The contributions of this paper include: 1) a decoupled file-parsing workflow; 2) a set of rule-matching and rule-processing methods specific to the domain of standard terminology parsing; and 3) the development and deployment of an online system. In summary, the proposed parsing tool not only resolves the existing problems of format sensitivity and high coupling but also enhances the efficiency and accuracy of terminology file parsing through its decoupled design and domain-specific rule sets, providing strong support for the construction of terminology databases.
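A minimal sketch of the decoupling idea, assuming a registry of independent (match, process) rule pairs in place of nested if-else logic; the rule names and patterns are illustrative, not the paper's actual rule set. Adding a new rule is then a single registration, with no edits to existing rules.

```python
# Sketch of a decoupled rule registry: each rule is an independent
# (regex, handler) pair, so rules do not reference one another.
import re

RULES = []

def rule(pattern):
    """Register a rule: the regex decides *if* it applies; the function, *how*."""
    def decorator(fn):
        RULES.append((re.compile(pattern), fn))
        return fn
    return decorator

@rule(r"^(\d+(?:\.\d+)*)\s+(.+)$")      # e.g. "3.1.2 pressure vessel" (illustrative)
def numbered_term(m):
    return {"id": m.group(1), "term": m.group(2)}

@rule(r"^NOTE\s*[:：]\s*(.+)$")          # notes attached to the preceding term
def note(m):
    return {"note": m.group(1)}

def parse(lines):
    for line in lines:
        for pattern, process in RULES:   # first matching rule wins
            m = pattern.match(line.strip())
            if m:
                yield process(m)
                break
```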
Xinyu Cao, Zhengyuan Han, Yi Yang, Liangliang Liu, Pai Peng, Haitao Wang
Open Access
Article
Conference Proceedings
Convo-Based Attitude Analysis of Twitter Big Data: A Case Study on Ukraine-Russia War Dataset
Social media has become a popular platform for studying public perceptions and opinions on important global events such as elections, pandemics, and international conflicts. Previous studies utilized text mining algorithms to analyze individual messages for references to relevant topics and associated sentiment. Such methods overlook the broader context in which these messages appear and, as a result, fail to capture the often intricate relationships between topics, messages, and their authors. More specifically, these methods do not account for social dynamics among the participants in an online discourse, which typically occurs within a convo (Katsios et al., 2019), a loosely structured cluster of posters interested in a common topic. In this paper, we present a convo-based analysis of a public social media dataset collected over a period of three months following the onset of the Ukraine-Russia conflict. In this dataset, we identify the most populous convos, the most influential participants within each, and the topics they discuss. We then demonstrate how the general attitude across these convos shifts over time from a largely pro-Ukraine to an increasingly pro-Russia stance, which we speculate is a result of ongoing influence operations. Our findings provide novel insights into the structure of social media traffic and the evolution of attitudes in online populations. This work is a first step towards a more comprehensive framework for social media analysis.
Ning Sa, Ankita Bhaumik, Tomek Strzalkowski
Open Access
Article
Conference Proceedings
Smart Cities: are they really accessible and truly smart?
Population growth associated with urbanization without adequate planning causes several social and infrastructure problems in cities. At the same time, the desire to become “smart” has increasingly become a focus among municipalities, which often adopt new technologies without an in-depth analysis of the consequences and without adequately considering the impact on individuals. In this context, this article addresses accessibility in smart cities, focusing on the integration of ergonomics and urban design. The main objective is to map and analyze, based on the basic principles of ergonomics of the built environment in conjunction with NBR ISO 37120 (ABNT, 2017), whether the urban accessibility indicators established by these guidelines are present in the city of Curitiba-PR, in order to determine whether there is effective accessibility in Brazil's pioneering “smart city” and to identify areas of good practice that can assist other cities in this transition, prioritizing accessibility for all citizens, especially people with disabilities and low mobility, in line with the Sustainable Development Goals (SDGs). The methodology combines exploratory and descriptive research, based on a literature review and applied qualitative analysis. The SWOT method supported the discussion of the data obtained and revealed important strengths in Curitiba-PR, such as its well-structured public transportation system, with adapted buses and accessible terminals, in addition to public policies focused on inclusion, such as the “Accessible Curitiba” program. The city also stands out for its adapted urban infrastructure, which contributes positively to mobility and quality of life. However, the study identified significant weaknesses, such as inequality in the distribution of accessible infrastructure, especially in peripheral areas, and the lack of adequacy in many private spaces, such as commercial and leisure establishments. Opportunities include expanding the transportation system and using assistive technologies. In addition, the city can expand its accessible cultural and leisure infrastructure, promoting social inclusion. Inclusive public policies can also be strengthened, addressing issues such as employability, education, and health for people with disabilities. On the other hand, the study highlighted threats that can hinder the advancement of accessibility, such as resistance to cultural and organizational change and economic challenges that can compromise investments in accessible infrastructure. These limitations are especially critical given the need to prioritize projects that promote universal accessibility. The results indicate that the combination of the fundamental principles of ergonomics of the built environment and NBR ISO 37120 (ABNT, 2017) is an effective tool for assessing and identifying accessibility gaps in smart cities. Its application can guide public policies and investments, promoting the continuous improvement of urban infrastructure and ensuring that cities like Curitiba-PR adapt and advance towards universal accessibility in a truly smart city. The study concludes that, although Curitiba already demonstrates exemplary practices, there is still room for significant improvement, especially in balancing central and peripheral areas. Nevertheless, it is already possible to map good practices in six areas, serving as indicators based on the EAC and NBR principles present in Curitiba, which demonstrate that the city is aligned with accessibility and seeks to be “smart” for all.
Larissa Batista, Ana Carolina Lacorte De Assis, Michelle Nascimento Costa, José Alberto Barroso Castañon
Open Access
Article
Conference Proceedings
AI Optimization of Resolution Strategy in Utility Billing and Revenue Assurance
Sustainable profitability for utility companies hinges on the reliability of their billing and revenue collection processes. While the majority of billing operations are efficiently managed through Robotic Process Automation (RPA), there remains a segment that eludes automation and is consequently delayed. This portion of the billing requires manual intervention to complete. The timely resolution of these bills is especially important for Southern California Edison, since they may be subject to Tariff Rule 17 and result in permanently lost revenue. Unresolved bills also adversely affect customer satisfaction. Ensuring that these manual processes are handled promptly and accurately is crucial to maintaining the financial health of the company and fostering customer trust. Efficiently addressing these challenges can enhance operational efficiency and support the long-term growth of utility companies, as well as excellence and continuous improvement. In this study, we explored the delayed-bill accounts to identify patterns and trends. We combined our findings with machine learning models, such as the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) model, to enhance the process of addressing these delayed bills. This method selectively targeted accounts for more efficient resolution, reducing lost revenue and improving profitability. Moreover, we expanded this analysis by utilizing predictive models to detect future accounts that are likely to encounter repeated issues. This proactive approach contrasts with the current reactive measures, providing opportunities for improving the efficiency and effectiveness of bill resolution.
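A hedged sketch of the DBSCAN step, with hypothetical per-account features: dense clusters of similar delayed bills can be triaged together, while noise points (label -1) are routed for individual review.

```python
# Illustrative DBSCAN clustering of delayed-bill accounts; the feature
# columns (days delayed, bill amount, prior exceptions) are hypothetical.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

X = np.array([[12, 480.0, 1],
              [75, 1320.5, 4],
              [9, 150.0, 0],
              [80, 1500.0, 5]])
X_scaled = StandardScaler().fit_transform(X)  # DBSCAN is distance-based, so scale first

labels = DBSCAN(eps=0.8, min_samples=2).fit_predict(X_scaled)
for account, label in zip(X, labels):
    # label >= 0: part of a dense cluster (triage as a group);
    # label == -1: noise point (review individually)
    print(account, "->", "cluster" if label >= 0 else "noise", label)
```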
Faraz Ahangar, Zining Yang, Lauren Huang
Open Access
Article
Conference Proceedings
Behavioural Intentions of Natural Farming Farmers to Adopt Digital Platforms for Purchasing Inputs: A Structural Equation Modeling-Based Multi-Group Analysis
Natural Farming (NF) is a non-chemical agricultural practice that has gained traction in India since 2016. However, its expansion remains limited due to various challenges. This research investigates the determinants affecting NF farmers' intention to use digital platforms for purchasing agricultural inputs, based on gender, by employing an extended framework of the Unified Theory of Acceptance and Use of Technology (UTAUT) with Performance Expectancy (PE), Effort Expectancy (EE), Facilitating Conditions (FC), Social Influence (SI), Personal Innovativeness (PI), Perceived Cost (PC), and Perceived Risk (PR) as constructs. A total of 795 valid responses were collected from NF farmers in the state of Andhra Pradesh, India, and analysed using Measurement Invariance of Composite Models (MICOM) and Partial Least Squares Structural Equation Modeling-based Multi-Group Analysis (MGA). The MICOM procedure confirmed partial measurement invariance, allowing for MGA based on gender. Results indicate that PE and PI significantly impact adoption for both genders, while FC influences only males. These findings highlight the need for gender-specific digital adoption strategies, emphasizing performance benefits, innovation readiness, and access to technological services to enhance digital adoption among NF farmers.
Aravind Kumar Saride, Mrigank Sharad
Open Access
Article
Conference Proceedings
AIToys: A conceptual definition and future research agenda
This paper introduces the conceptual definition of AIToys, which expands on IoToys to incorporate AI capabilities. AIToys are envisioned as life-long play partners with life-wide implications for play across leisure, learning, and work life. They range from educational robots to anthropomorphized or zoomorphized social and conversational companions, exemplifying the growing robotification of toy play across generations. We explore the concept of AIToys through fictional stories, theoretical perspectives, and toy industry offerings that represent the recent evolution of IoToys into AIToys. These toys can learn from our behavior and adapt to how we interact, and each has a persuasion strategy to provoke emotional responses. Our research aims to define the characteristics of AIToys, identify current challenges demanding more research, and propose development directions for sustainable and ethically responsible AIToy design.
Katriina Heljakka, Pirita Ihamäki
Open Access
Article
Conference Proceedings
FITMag: A Framework for Generating Fashion Journalism Using Multimodal LLMs, Social Media Influence, and Graph RAG
As generative artificial intelligence (AI) reshapes the landscape of media and communication, its integration with social media opens new possibilities for human-centered content creation. In the field of fashion journalism, which relies heavily on style, nuance, and visual culture, we present FITMag, a framework that combines multimodal large language models (LLMs) with real-time social media influence and graph data to generate fashion articles approaching professional quality. FITMag builds on the FITNet and FITViz datasets, which identify fashion influencer subgraphs on Twitter. It uses multimodal inputs including influencer metadata, retweets, mentions, hashtag trends (such as #NYFW and #sustainability), and image content to create structured prompts for both text and image generation. Leading LLMs such as ChatGPT, Claude, DeepSeek, and LLaMA are paired with Stable Diffusion to generate content in three primary formats: event-driven articles, niche community pieces, and trend-based narratives. Graph Retrieval-Augmented Generation (Graph RAG) is used to enhance contextual alignment by connecting influencer activity with fashion discourse. To assess FITMag’s effectiveness, we conducted a human-centered study with 15 fashion professionals including editors, stylists, bloggers, and researchers. Participants evaluated 52 fashion articles using 5-point Likert scales across three dimensions: authenticity, coherence, and style. They also completed a blind identification task to determine whether each article was human-written or generated by AI. Quantitative results show that GPT-4o with FITNet data achieved the highest overall performance among AI models, closely matching human-written content in stylistic quality. Participants frequently misclassified AI-generated text as human-written, especially when produced by GPT-4o and Claude, suggesting strong perceived realism. However, vision and language alignment remained a limitation. Participants observed that AI-generated images sometimes lacked contextual relevance or omitted recognizable influencers due to licensing restrictions. These findings highlight both the capabilities and current limitations of multimodal systems. While AI-generated articles can reach professional-level quality in text, challenges in image and text coherence persist. FITMag contributes to ongoing conversations about AI-assisted journalism by integrating social influence data, generative models, and user-centered evaluation. The research provides insight into how AI can support rather than replace human creativity in fashion media. Ultimately, FITMag serves as a testbed for studying AI and human collaboration in social media contexts. It offers practical tools and theoretical foundations for designing future systems that balance generative power, editorial integrity, and cultural sensitivity across digital platforms.
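To make the Graph RAG step concrete, here is one plausible way to assemble a structured prompt from retrieved influencer-graph context. This is our illustration, not FITMag's code; all field names are hypothetical.

```python
# Illustrative sketch (our reconstruction): turning influencer-graph
# signals into a structured generation prompt. Field names are hypothetical.
def build_fashion_prompt(influencer, retrieved_neighbors, hashtags, article_format):
    context = "\n".join(
        f"- @{n['handle']} ({n['mentions']} mentions): {n['recent_post']}"
        for n in retrieved_neighbors            # neighbors retrieved from the graph
    )
    return (
        f"Write a {article_format} fashion article.\n"
        f"Focal influencer: @{influencer}\n"
        f"Trending hashtags: {', '.join(hashtags)}\n"
        f"Connected voices from the influence graph:\n{context}\n"
        f"Ground every claim in the context above."
    )

prompt = build_fashion_prompt(
    influencer="style_editor",
    retrieved_neighbors=[{"handle": "runway_watch", "mentions": 42,
                          "recent_post": "Backstage at #NYFW..."}],
    hashtags=["#NYFW", "#sustainability"],
    article_format="event-driven",
)
```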
Jinda Han, Mengyao Guo, Shanghao Li, Kailash Thiyagarajan, Zhinan Cheng, Li Zhao
Open Access
Article
Conference Proceedings
Challenges and Opportunities in E-commerce Distribution Networks in Johannesburg
As e-commerce in Johannesburg expands rapidly, its distribution networks warrant an examination of the challenges they may encounter and the opportunities those challenges can provide. Promoting development and improving competitiveness relies on overcoming certain logistical challenges. Significant gaps exist in the present research: a lack of studies focused on Johannesburg and an inadequate understanding of consumer viewpoints concerning e-commerce distribution. This study intends to narrow these gaps by considering significant issues, including inventory accuracy, the impact of infrastructure on costs, and customer switching rates and their consequences for operational efficiency. The study used a mixed-methods approach, combining current quantitative data with qualitative insights from case studies and industry reports. This methodology yielded significant findings: companies in Johannesburg's e-commerce industry are expressing concerns about last-mile deliveries, inventory management, resource management, and real-time tracking. This underscores a demanding logistics landscape that requires targeted improvements in technology and infrastructure. These findings pertain to all e-commerce platforms and could be utilised to enhance operational efficiency and consumer satisfaction. By surmounting these challenges and capitalising on the opportunities in Johannesburg's e-commerce sector, companies can gain a competitive advantage and establish a foundation for long-term success.
Matanda Alan Tshinkobo, John Ikome, Ibrahim Idowu
Open Access
Article
Conference Proceedings
Revolutionizing Logistics Management with Blockchain Technology
The logistics industry, a critical component of global trade, faces numerous challenges such as inefficiencies, fraud, and lack of transparency. As digital transformation reshapes industries, blockchain technology has emerged as a potential game-changer in addressing these challenges. By enabling secure, transparent, and immutable record-keeping, blockchain offers unprecedented opportunities to streamline operations, improve traceability, and enhance accountability within the logistics sector. This paper explores the applications of blockchain in logistics management, focusing on its potential to improve supply chain transparency, reduce operational costs, and increase overall efficiency. It also addresses the technical and organizational hurdles to blockchain adoption and highlights ongoing efforts to integrate this technology into the logistics sector.
Ayodeji Dennis Adeitan, Clinton Aigbavboa
Open Access
Article
Conference Proceedings
Interpretable AI-Generated Videos Detection using Deep Learning and Integrated Gradients
The rapid advancements in generative AI have led to text-to-video models creating highly realistic content, raising serious concerns about misinformation spread through synthetic videos. As these AI videos become more convincing, they threaten information integrity across social media, news, and digital communications. Using AI-generated videos, bad actors can now create false narratives, manipulate public opinion, and influence critical processes like elections. This technology's democratization means that sophisticated disinformation campaigns are no longer limited to well-resourced actors, creating an urgent need for reliable detection methods and human-machine cooperation to maintain public trust in visual information across our digital transformation landscape. The accessibility of these tools to a broader audience amplifies the potential for widespread misinformation, making robust detection systems crucial for maintaining social media integrity. Through our research into video generation models, we identified that state-of-the-art systems like diffusion transformers operate on patches of noisy latent spaces. We deliberately mirrored this architecture in our classifier design, enabling it to analyze videos using the same fundamental structural unit that generation models use to create them. This architectural alignment allows our system to adapt to emerging generation techniques while maintaining detection efficacy. We designed an explainable video classifier using deep learning and neural networks that detects AI-generated content and shows evidence for its decisions. The classifier uses three main parts: a convolutional encoder that turns video frames into latent representations, a patch vectorizer that breaks these representations into analyzable chunks, and a transformer that processes these chunks to make the final decision. This human-centered computing design lets us efficiently process videos while maintaining explainability through Integrated Gradients, which reveal which input parts influenced the model's decisions. We use Integrated Gradients to show which parts of a video led to the model's decision. This method looks at how the model's decision changes as we move from a blank video to the actual video, showing us which pixels matter most for classification. These pixel-level maps provide clear evidence of why the model thinks a video is AI-generated or real, providing transparency critical for building trust in automated content verification systems. We will test our model on the GenVideo dataset, a comprehensive collection of videos labeled as real or AI-generated from diverse sources, including Stable Diffusion, Sora, Kinetics-400, and MSRVTT. This large-scale data analytics evaluation will check how well it classifies videos and explains its decisions, helping determine whether the model can work as a practical tool for machine learning-based content verification, considering that wrong AI classifications could harm content creators' reputations. Our work adds to the growing field of explainable AI in content authentication and shows why we need clear evidence when making high-stakes decisions about video content. Future work will look at detecting hybrid videos (real videos with AI elements added) and making our visual explanations more useful for human decision-makers in content verification. The insights gained will inform the development of more sophisticated detection systems capable of addressing evolving challenges in digital content authenticity.
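The Integrated Gradients computation the abstract describes, accumulating gradients along a path from a blank video to the real one, can be sketched as follows. This is a minimal reconstruction assuming a model that returns a two-class logit per batch item, not the authors' implementation.

```python
# Minimal Integrated Gradients sketch (not the authors' code):
# attributions are gradients of the "AI-generated" logit accumulated
# along a straight-line path from an all-zero baseline to the input,
# scaled by the input difference.
import torch

def integrated_gradients(model, video, steps=50):
    """video: (T, C, H, W) tensor; returns per-pixel attributions, same shape."""
    baseline = torch.zeros_like(video)            # the "blank video" reference
    total_grads = torch.zeros_like(video)
    for alpha in torch.linspace(0, 1, steps):
        interp = (baseline + alpha * (video - baseline)).requires_grad_(True)
        score = model(interp.unsqueeze(0))[0, 1]  # assumed (batch, 2) logits
        score.backward()
        total_grads += interp.grad
    # Riemann-sum approximation of the path integral
    return (video - baseline) * total_grads / steps
```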
Joshua Weg, Taehyung Wang, Li Liu
Open Access
Article
Conference Proceedings
Leveraging LLMs to emulate the design processes of different cognitive styles
Cognitive styles, which shape designers’ thinking, problem-solving, and decision-making, influence strategies and preferences in design tasks. In team collaboration, diversity in cognitive styles enhances problem-solving efficiency, fosters creativity, and improves team performance. The ‘Co-evolution of problem–solution’ model serves as a key theoretical framework for understanding differences in designers’ cognitive styles. Based on this model, designers can be categorized into two cognitive styles: problem-driven and solution-driven. Problem-driven designers prioritize structuring the problem before developing solutions, while solution-driven designers generate solutions while design problems are still ill-defined and then work backward to define the problem. Designers with different expertise and disciplinary backgrounds exhibit distinct cognitive style tendencies. Different cognitive styles also adapt differently to design tasks, excelling in some more than others. As a rapidly advancing technology, large language models (LLMs) have shown considerable potential in the field of design. Their powerful generative capabilities position them as potential collaborators in design teams, emulating different cognitive styles. These emulations aim to bridge cognitive differences among team members, enable designers to leverage their individual strengths, and ultimately produce more feasible and higher-quality design solutions. However, previous studies have been limited to leveraging LLMs to directly generate design outcomes based on different cognitive styles, neglecting the emulation of the design process itself. In fact, the evolutionary development between problem and solution spaces better reflects the core differences in cognitive styles. Moreover, communication and collaboration within design teams extend beyond simply exchanging solutions, spanning multiple stages of the design process, from problem analysis and idea generation to evaluation. To better integrate LLMs into design teams, it is necessary to consider the emulation of the design cognition process. To this end, our study, based on the cognitive style taxonomy proposed by Dorst and Cross (2001), explores how LLMs can be used to emulate the design processes of problem-driven and solution-driven designers. We develop a zero-shot chain-of-thought (CoT)-based prompting strategy that enables LLMs to emulate the step-by-step cognitive flow of both design styles. The prompt design is inspired by Jiang et al. (2014) and Chen et al. (2023), who analyzed cognitive differences in the conceptual design process using the FBS ontology model. Furthermore, to evaluate the effectiveness of LLMs in emulating cognitive styles, this study establishes three-dimensional evaluation metrics: static distribution (the proportion and preference of cognitive issues), dynamic transformation (behavioral transition patterns), and the creativity of the design outcomes. Using human design behaviours identified in previous studies as a benchmark, we compare the cognitive styles emulated by LLMs under different design constraints against human performance to assess their alignment and differences. The results show that LLM-generated design processes align well with human cognitive styles, effectively emulating static cognitive characteristics, enhancing novelty and integrity in solutions, and demonstrating superior creativity compared to baseline methods. However, LLMs lack the fully complex nonlinear transitions between problem and solution spaces observed in human designers. This process-based emulation has the potential to enhance the application of LLMs in design teams, enabling them not only to serve as tools for generating solutions but also to support collaboration during key stages of the design process. Future research should enhance LLMs' reasoning flexibility through fine-tuning or the GoT approach and explore their impact on human-AI collaboration across diverse design tasks to refine their role in design teams.
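A minimal sketch of what a zero-shot CoT prompt pair for the two styles might look like; this paraphrases the strategy described above rather than reproducing the paper's actual prompts, which draw on the cited FBS-ontology analyses.

```python
# Illustrative prompt pair (our paraphrase, not the paper's prompts):
# each style prompt steers the step-by-step cognitive flow differently.
STYLE_PROMPTS = {
    "problem-driven": (
        "You are a problem-driven designer. Think step by step: first decompose "
        "and structure the design problem (requirements, constraints, users), "
        "and only then propose and refine solutions."
    ),
    "solution-driven": (
        "You are a solution-driven designer. Think step by step: first propose "
        "candidate solutions from your experience, then work backward to clarify "
        "what problem each solution actually addresses, and iterate."
    ),
}

def make_prompt(style, design_brief):
    # zero-shot CoT: no worked examples, only the reasoning instruction
    return (f"{STYLE_PROMPTS[style]}\n\nDesign brief: {design_brief}\n\n"
            f"Let's think step by step.")
```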
Xiyuan Zhang, Jinyu Gu, Hongliang Chen, Shiying Ding, Chunlei Chai, Hao Fan
Open Access
Article
Conference Proceedings
Similarity Calculation of Concepts Based on Feature Distillation
To address the low accuracy of similarity calculations caused by the lack of semantic information in standard terminology databases, this paper proposes a concept similarity method based on feature distillation. The method first uses the FastText model to obtain word vectors for the text, then recalculates and reweights these vectors using the resources of the standard terminology database, and finally adds a BiLSTM model to further extract contextual semantic information. The experimental results show that the method effectively integrates domain knowledge, enhances the recognition of textual semantics, and significantly improves the accuracy of similarity calculations between texts in the standard terminology database.
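The vector pipeline can be sketched as below, assuming illustrative domain weights derived from the terminology database; the paper's BiLSTM stage, which further encodes context, is elided here.

```python
# Hedged sketch of the first two stages: FastText vectors, reweighted
# with terminology-database resources, compared by cosine similarity.
# Vocabulary and weights are illustrative.
import numpy as np
from gensim.models import FastText

sentences = [["pressure", "vessel"], ["pressure", "sensor"], ["relief", "valve"]]
ft = FastText(sentences, vector_size=50, min_count=1, epochs=10)

# illustrative domain weights: standardized headwords get boosted
term_weight = {"pressure": 1.5, "vessel": 1.2, "sensor": 1.2,
               "valve": 1.2, "relief": 1.0}

def concept_vector(tokens):
    vecs = [term_weight.get(t, 1.0) * ft.wv[t] for t in tokens]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(concept_vector(["pressure", "vessel"]),
             concept_vector(["pressure", "sensor"])))
```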
Haitao Wang, Lianghong Lin, Xinyu Cao, Jianfang Zong
Open Access
Article
Conference Proceedings
Far beyond knowledge – How hybrid intelligence is fundamentally changing our work and economy by enhanced innovation processes
Artificial intelligence (AI), as both a technology and a scientific discipline, is bringing about a significant transformation in the realm of work. On one hand, AI systems provide organizations with a multitude of opportunities to enhance efficiency and cost-effectiveness in their processes. On the other hand, organizations face significant challenges when it comes to selecting the appropriate AI technologies and functions for specific use cases, as well as addressing the need for new forms of human-machine interaction (HMI) and collaboration (HMC). The digital transformation of strategic and operative processes through the introduction of new hybrid forms of HMI and HMC is an innovative and pioneering development that is still in its early stages. The presentation will cover the potentials, limitations, and impact of cognitive systems on the business of the future, and will highlight the success-critical factors that need to be considered when designing new forms of human-machine collaboration, based on a methodology developed by the author. Hybrid intelligence will pave the way towards sustainable and enhanced innovation processes in the era of AI.
Christian Vocke
Open Access
Article
Conference Proceedings
Web-based Human-centred Explainability of NLP Tasks with Rationale Mapping Theory
Recently, human-generated data has been used to explain machine learning and NLP models. Such methods usually focus only on labelling results with relevant human-generated tags, explicitly identifying objects, actions, or other elements in the output. Therefore, potential explanations only refer to the data elements and the model parts that produce them. The cognitive process applied by the human to perform the task is completely neglected. We claim the latter is essential to provide complete and human-understandable explanations of results, models, and processes. Some existing approaches studied in linguistics, such as rationale mappings, aim to achieve this objective by formalizing tree-based data structures to collect human rationale applied to NLP tasks. This work presents a web-based, human-centred approach to collect rationale mappings for various NLP tasks. Our contribution includes the formalization of the Rationale Mapping theory, the design of the human-computer interaction paradigm implementing the theory, the specification of the data collection process, its implementation as a crowdsourcing web application, and its validation with experimental studies showing its reliability and effectiveness.
Andrea Tocchetti, Valentina Naldi, Marco Brambilla
Open Access
Article
Conference Proceedings
Interdependence Exposes The Limits of Classical Team Science
We have been developing the mathematics of interdependence theory for human-human, human-machine, human-AI and machine-machine teams. We provide a brief update on our progress and the challenges we face. This includes a brief review of the limits of classical team science from three perspectives. Then we discuss the value of interdependence in a team. We also discuss our future plans, including the value of interdependence to a society and to the development of future technology to advance the science of human-machine teams.
William Lawless
Open Access
Article
Conference Proceedings
Humans and AI-based communication and reasoning in complex adversarial domains
It is generally agreed that trust is best conceptualized as a multidimensional psychological attitude involving beliefs and expectations about the trustee’s trustworthiness, derived from experience and interactions with the trustee in situations involving uncertainty and risk. It has to do with the notion of willing exposure to risk and an agent willing to be vulnerable to “the other”. In this paper we explore ideas about credition, the interdisciplinary process of believing: how communicating agents come to believe each other, how issues of uncertainty enter into believability, and how belief and consciousness interplay. The paper also addresses epistemological issues related to reasoning and analytical approaches that integrate multidimensional perspectives (labeled “epistemic pluralism”) for complex adversarial domains such as those involved in modern and future intelligence analysis.
James Llinas
Open Access
Article
Conference Proceedings
Assessment of the Capabilities of Multimodal Large Language Models in Locating and Resolving Ambiguities during Human-Robot Teaming
Human-robot teaming is bound by the quality of communication, which includes maintaining a shared context among the team. Our work studies the quality of ambiguity identification and resolution by Multimodal Large Language Models (MMLLMs) towards creating a clear context for teams. We developed a benchmark of images with associated ambiguous queries to replicate a teaming context with a human collaborator, and evaluated the performance of several MMLLMs on this benchmark to assess their capabilities in identifying and resolving ambiguities. We created a testing framework in which the MMLLM processes commands accompanied by an image, and which then evaluates the model's performance in detecting and resolving ambiguities. To create a shared context between our human and robot collaborators, our system provides a picture that captures the viewpoint of the robot as well as a query provided by the human collaborator. The chosen MMLLM processes this information and outputs both the portions of the query that are ambiguous and suggestions for clarification. A corrected version of the prompt may then be sent to a planner or a system that provides actionable commands. To evaluate each MMLLM's performance, we compare the ambiguities identified by the model with the expected ambiguities from the datasets. We found 81% accuracy for the top-performing MMLLM.
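One plausible shape for the evaluation loop, under the assumption that each benchmark item pairs an image with a query and annotated expected ambiguities; `query_mmllm` is a hypothetical wrapper around the model under test.

```python
# Sketch of the benchmark evaluation (our reconstruction): accuracy
# counts items where the model surfaces every annotated ambiguity.
def evaluate(benchmark, query_mmllm):
    hits = 0
    for item in benchmark:
        result = query_mmllm(image=item["image"], command=item["query"])
        found = {a.lower() for a in result["ambiguities"]}
        expected = {a.lower() for a in item["expected_ambiguities"]}
        if expected <= found:          # all annotated ambiguities detected
            hits += 1
    return hits / len(benchmark)

# one illustrative benchmark item
benchmark = [{"image": "kitchen.png",
              "query": "Pick up the cup",            # ambiguous: which cup?
              "expected_ambiguities": ["which cup"]}]
```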
William Valentine, Michael Wollowski
Open Access
Article
Conference Proceedings
Beyond Explicit Instruction: Enhancing Human-AI Collaboration with Implicit User Feedback
Successful human-AI teamwork depends on AI systems that can adjust to the evolving needs and situations of users. Rather than relying on explicit instructions from the user, an adaptable agent can make use of implicit feedback from end-users to infer their behavioral and situational needs. Implicit information, such as user activity and eye-tracking data, can help infer behavioral patterns that uncover the user's desires, requirements, and mental states. This method allows AI systems to deliver more tailored, proactive, and holistic assistance, which not only minimizes the user's real-time workload but also adds redundancy against human error, much like a beneficial human teammate. While this approach offers several potential advantages, there are practical difficulties in gathering and interpreting the data. Upcoming efforts to deduce high-level actions from low-level data will need to tackle these challenges to facilitate intuitive human-AI interactions and improve the efficacy of collaborative systems.
Jaelle Scheuerman, Shannon Mcgarry, Ciara Sibley, Noelle Brown
Open Access
Article
Conference Proceedings
Information ergonomics and cognitive dissonance by AI in HUMINT/OSINT processes
The study explores the balance between enhancing situational awareness and maintaining good information ergonomics in AI-supported HUMINT/OSINT processes. The proposition for the experimental research was that organizing, filtering, and recognition algorithms have a leveraging effect in HUMINT and OSINT, increasing effectiveness through lower cognitive load and a better fit to human information processing. Repetitive activities in particular, as well as maintaining attention on several instances of critical events, call for robust and explainable methods for information processing. The key issue is maintaining situational awareness at the level of intelligence tasks as well as at the meta-level, i.e. the organization of intelligence tasks. Simple algorithms and AI-powered methods can enhance situational awareness in time-critical operations, but they may also cause cognitive dissonance as operators question the accuracy of the AI-provided information, leading to additional cognitive load and a poor information-ergonomic state. The results are based on a constructive research process: methods were designed by operators yet put into action by external developers, and the experimental phase consisted of a reanalysis of intelligence data and information. Validation in this setting is based on expert assessment, evidence of good functionality, and the effect on information ergonomics. Acceptance of and trust in AI are crucial to avoiding cognitive dissonance and increased cognitive load, and these factors are also discussed in the paper.
Jussi Okkonen, Mika Hyytiäinen, Mia Laine, Svante Laine, Tuuli Keskinen, Markku Turunen
Open Access
Article
Conference Proceedings
Enhancing Utility Customer Service and Compliance: An AI-Powered Approach to Call Analysis
This study presents a framework for analyzing customer service call transcripts using a Large Language Model (LLM) and unsupervised machine learning. We employed BERTopic to identify core topics from summarized transcripts, refined through an iterative process against internal best practices. The LLM then generated detailed call reasons and agent responses, mapped to standardized tags via an embedding model for consistency. This framework, implemented on a scalable GCP architecture with robust security measures, allows for granular root cause analysis and identification of customer sentiment trends. Evaluation of the LLM demonstrated high recall rates for topic detection and accuracy in generating summaries and call reasons. This approach enables proactive identification of customer needs, targeted agent coaching, and compliance risk mitigation, ultimately enhancing customer experience and operational efficiency.
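A minimal sketch of the BERTopic stage, assuming the LLM summarization step has already produced one summary per call; parameters are illustrative, and a realistically sized corpus (hundreds of summaries) is needed for stable topics.

```python
# Illustrative topic-extraction stage (not the production GCP pipeline):
# BERTopic clusters LLM-generated call summaries into core topics that
# can then be mapped to standardized tags via an embedding model.
from bertopic import BERTopic

def extract_call_topics(summaries):
    """summaries: list of strings, one LLM-generated summary per call.

    BERTopic needs a realistically sized corpus (hundreds of documents)
    for stable clusters; min_topic_size here is illustrative.
    """
    topic_model = BERTopic(min_topic_size=10)
    topics, _ = topic_model.fit_transform(summaries)
    return topic_model.get_topic_info(), topics
```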
Jonathan Presto, Kar Wai Lee
Open Access
Article
Conference Proceedings
A Comparison of ARIMA and XGBoost Models For Time Series Analysis Utilizing Human Behavioral Data
Time series modeling is a powerful tool utilized across multiple domains to assess the underlying stochastic mechanisms in a dataset or to predict future values based on past values in the series. Time series forecasting has been used for many applications, including the stock market, healthcare, and the environmental sciences. Traditional models like ARIMA struggle with more sophisticated datasets that may have non-linear patterns, whereas more advanced machine learning models were created to handle those relationships. Despite the wide range of uses for time series modeling, its use in psychology is limited. We propose that, by better understanding these models’ forecasting abilities on human behavioral datasets, time series methods can be used in various psychological and human factors applications, such as monitoring and predicting behavior for improved interface design. Our work uses this tool to predict future values in a specified time trial in two human behavioral datasets. We compare the performance of ARIMA and XGBoost models to evaluate the strengths and weaknesses of both and establish which performed best on our chosen evaluation metrics. Overall, ARIMA had more favorable values across performance metrics in most conditions, although the XGBoost models still performed well. Although the models in our work performed well, the data needed to possess a stable mean and variance for us to utilize them. This requirement led to a loss of the trend throughout the time trial that was unique to each condition’s effect on participants. Future research can build on what we learned to work towards predictive time series models that accurately capture the unique trends of human behavioral data for more enhanced interface design.
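A hedged sketch of the comparison on a synthetic stationary series: ARIMA forecasts the held-out horizon directly, while XGBoost is framed as one-step-ahead regression on lagged values, one common setup that is not necessarily the authors' exact configuration.

```python
# Illustrative ARIMA vs. XGBoost comparison on a synthetic stationary
# AR(1) series standing in for a behavioral measure.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
series = np.zeros(300)
for t in range(1, 300):
    series[t] = 0.7 * series[t - 1] + rng.normal()  # stable mean and variance
train, test = series[:250], series[250:]

# ARIMA: fit on the training portion, forecast the held-out horizon
arima_pred = ARIMA(train, order=(2, 0, 1)).fit().forecast(steps=len(test))

# XGBoost: one-step-ahead regression on the previous `lags` observations
lags = 5
X = np.array([series[i - lags:i] for i in range(lags, len(series))])
y = series[lags:]
split = 250 - lags
xgb = XGBRegressor(n_estimators=200, max_depth=3).fit(X[:split], y[:split])
xgb_pred = xgb.predict(X[split:])

rmse = lambda pred: float(np.sqrt(np.mean((pred - test) ** 2)))
print("ARIMA RMSE:  ", rmse(arima_pred))
print("XGBoost RMSE:", rmse(xgb_pred))
```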
Vivian Egan, Elizabeth Fox
Open Access
Article
Conference Proceedings
AI Tool Compliance Reporting: A Heuristic Analysis of Survey Data Using Natural Language Processing
This study examined how well New York City’s public AI tools reported good design practices for users. It analyzed 76 reports about algorithmic tools using a mix of computational methods (natural language processing), human review, and Nielsen's ten common usability heuristics, such as showing system status, giving users control, and providing help. The tools often followed some of these rules, especially those that support transparency, user control, and clear design; others, like helping users prevent mistakes or reducing memory load, were rarely addressed. Agencies may be focusing more on making tools technically sound and less on making them easy and fair to use. We also looked at the language in the reports and found differences based on the heuristic: some used more formal or technical words, while others were simpler and more user-friendly. This study's findings confirm earlier ones that public trust in AI depends on transparency and fairness. More work is needed to include all users, especially regarding high-risk tools like those used in healthcare or law enforcement. Future studies should involve users and designers directly and look at tools across more sectors to improve design and fairness in public AI.
Aimee Roundtree
Open Access
Article
Conference Proceedings
Assessment of Central Nervous System Fatigue in Mountain Rescuers Following a Simulated Winter Rescue
The aim of this study was to analyze central nervous system fatigue through the critical flicker fusion threshold (CFF) in mountain rescuers after a simulated winter rescue. Fifteen rescuers (13 men and 2 women; age: 32.1 ± 8.5 yr) participated in the study, which was conducted at the Bormio ski resort in Italy. The simulation included ascending to a simulated victim (~75 kg), victim packaging, and descending using rescue stretchers or sleds. The rescuers’ CFF was assessed before and after the simulation, and their effort during the task was monitored through heart rate measurements. Throughout the simulation, the rescuers maintained an average intensity of 79.4 ± 6.7% of their maximal heart rate, with no significant differences in effort between the ascent and descent phases (p > 0.05). The CFF, measured as an indicator of sensory and cognitive fatigue, showed baseline values of 42.9 ± 2.0 Hz and post-simulation values of 43.6 ± 2.5 Hz, with no significant changes (p > 0.05). This finding contrasts with previous hypotheses suggesting cognitive decline associated with fatigue following high-intensity tasks. The lack of significant changes could be attributed to the rescuers' experience, which allowed them to regulate their intensity and employ effective strategies to avoid excessive fatigue. Additionally, the moderate environmental conditions (~7°C) likely reduced thermal strain, contributing to the stability of the CFF results. In conclusion, no significant differences in CFF were observed following the rescue simulation, suggesting that the protocol conditions and the characteristics of the studied group mitigated cognitive fatigue. These findings emphasize the importance of specific training programs to optimize the performance of mountain rescuers in real-life conditions.
Belén Carballo Leyenda, Pilar Sánchez Collado, Fabio García-Heras, Jorge Gutiérrez Arroyo, Juan Rodríguez Medina, Jose A. Rodríguez-Marroyo
Open Access
Article
Conference Proceedings
Climate Change Pulse: A RAG-Driven Interactive Platform for Exploring Disaster-Linked Climate Sentiment on Social Media
Public engagement with climate change tends to peak during extreme-weather events and dissipate soon thereafter, yet the quantitative relationships among spatial proximity, temporal context, and ideological stance remain under-explored. We present Climate Change Pulse (https://climatechangepulse.org), an open-access web platform that unifies a geovisual dashboard of 15 million climate-related tweets with 9,094 EM-DAT disaster records and augments the interface with an agentic Retrieval-Augmented Generation (RAG) chatbot. The system allows users to pose natural-language questions (e.g., “What was climate-change sentiment around the 2021 German floods?”) and receive answers drawn dynamically from two in-memory SQLite databases.
Research Questions: RQ1. How does spatial proximity to a disaster correlate with the polarity of climate-related tweets? RQ2. How do sentiment trends differ across stance categories (believer, neutral, denier) in the days surrounding an event? RQ3. Can an agentic RAG loop provide reliable conversational access to large, tabular climate datasets?
System Architecture: A large language model (LLM) receives database schemas plus the user prompt, generates SQL queries, and validates them through a three-retry error-feedback loop. A timeline widget enables year-by-year navigation, while selecting a disaster overlays the associated tweet cluster and surfaces embedded tweets for qualitative inspection.
Experimental Design: Tweets were analysed in three concentric distance bands (≤ 500 km, 500–1,000 km, ≥ 1,000 km) and three temporal windows (1, 3, and 7 days before/after impact) for 730 climate-relevant disasters between 2006 and 2019. Sentiment and stance labels were taken from the publicly released Climate-Change-Twitter Dataset. Qualitative heat-maps and mean-sentiment plots were generated to visualise proximity and stance effects. The RAG component was exercised with a battery of representative user queries to confirm syntactic correctness and conversational fallback behaviour.
Findings: Tweets originating in the ≤ 500 km band consistently expressed more negative sentiment than those posted at greater distances, supporting the intuition that proximity amplifies emotional response. Across all temporal windows, denier tweets were markedly more negative than believer or neutral tweets; the neutral category displayed the most stable sentiment. The agentic RAG loop successfully returned validated SQL and, when faced with an unanswerable request, provided an explanatory fallback, illustrating its suitability for interactive exploration.
Contributions: Interactive AI for social computing: the first integration of geovisual disaster analytics with a RAG chatbot, enabling conversational interrogation of multimillion-record climate corpora. Empirical insight: qualitative evidence linking disaster proximity, ideological stance, and sentiment dynamics across thirteen years of Twitter data. Open science: source code, processed datasets, and prompt templates released under an MIT licence to facilitate replication and extension (https://github.com/CCOh125/climatechangepulse.github.io).
Future Work: Ongoing efforts include real-time ingestion of live disaster feeds, fine-tuning domain-specific language models for richer affective nuance, and controlled user studies to assess decision-support value in crisis-communication contexts.
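The three-retry error-feedback loop can be sketched as follows; this is our reconstruction, with `llm_generate_sql` a hypothetical wrapper around the LLM.

```python
# Sketch of the agentic RAG loop (our reconstruction): generated SQL is
# validated by executing it against SQLite; on failure, the error text
# is fed back to the LLM, up to three retries.
import sqlite3

def get_schema(conn):
    return [r[0] for r in conn.execute(
        "SELECT sql FROM sqlite_master WHERE type='table'")]

def answer_with_rag(question, conn, llm_generate_sql, max_retries=3):
    error = None
    for _ in range(max_retries):
        sql = llm_generate_sql(question, schema=get_schema(conn), last_error=error)
        try:
            return conn.execute(sql).fetchall()   # validate by executing
        except sqlite3.Error as e:
            error = str(e)                        # feed the error back to the LLM
    return None  # triggers the explanatory fallback for unanswerable requests
```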
Alan Zheng, Carlos Gonzalez
Open Access
Article
Conference Proceedings
Artificial Intelligence Revolution in Healthcare: Enhancing Clinical Practice with a New Member of the Team
Artificial intelligence (AI) is a relatively new medical resource with the potential to revolutionize current practices in the prevention and treatment of disease. AI has been defined as computer programs accomplishing tasks traditionally associated with human intelligence, such as learning and solving problems. As the ethical benefits of the increased efficiency and productivity of AI systems are being realized, the consequences of implementing such transformative technologies have raised ethical and regulatory questions across the globe. AI represents a tool to address longstanding issues in healthcare delivery and can achieve a caliber of healthcare quality that was previously beyond our grasp. However, AI systems may incorporate and often amplify existing patterns of practice, including societal biases and inequitable healthcare practices. Surmounting these ethical and regulatory challenges represents the next frontier in the successful implementation of AI to promote human development and wellbeing. In this study, we examined the current literature and analyzed the scope of practice around the ethical and regulatory issues surrounding AI in medicine and its application to healthcare. Knowledge integration was performed across disciplines relevant to the potential role of AI in facilitating progress, innovation, and quality assurance in healthcare. Thematic analysis was conducted on qualitative data pertaining to both the ethical and the regulatory challenges concerning the implementation of AI into healthcare practices. The project provided exposure to the innovative field of AI and various strategies related to ethical issues, regulatory laws, quality improvement, and healthcare management. We explored both the reliability and the current limitations of AI in order to create best-practice guidelines designed to facilitate the successful incorporation of AI into healthcare fields. Ethical challenges of AI such as risk management, data security, and a lack of transparency span all sectors working to implement these new technologies. All medical disciplines working to leverage the potential applications of AI struggle with the ethical challenges of informed consent, autonomy, accountability, bias, and equitable healthcare delivery. The field of laboratory medicine and pathology was a pioneer in the implementation of AI technology. Laboratory medicine and pathology face additional hurdles in ensuring accurate interpretation of results, such as unequal contexts, opportunity costs, and low levels of acceptable risk and uncertainty. Rather than an all-or-nothing approach, we suggest a stepwise, transparent, and patient-centered approach with clear boundaries for the incorporation of new tools. The AI-assisted era of medical care will be transformative but will never be void of all risk or ethical challenges. This work represents the first of many steps in using AI technology to optimize healthcare delivery in a way that protects and strengthens the ethical values of medical care.
Jay Kalra, Bryan Johnston
Open Access
Article
Conference Proceedings
Stethoscope to Algorithm: Equipping Tomorrow’s Doctors for Artificial Intelligence Driven Healthcare
Artificial Intelligence (AI) is transforming the delivery of patient-centred healthcare in Canada and around the globe. As the next generation of healthcare providers completes their medical education, it is critical to equip them with both digital literacy and the skills to effectively integrate AI into patient-centered care. In Canada, medical education is guided by the CanMEDS framework, which has recently transitioned to a competency-based medical education (CBME) model. CBME emphasizes outcomes-based learning, focusing on patient-centered care through direct observation and assessment of Entrustable Professional Activities (EPAs). These EPAs are specific, observable, and measurable units of professional practice, underpinned by milestones that track progression and facilitate continuous feedback to learners. The CBME framework is divided into four stages—transition to discipline, foundation, core, and transition to practice—and is structured around seven CanMEDS roles: Medical Expert, Communicator, Collaborator, Leader, Health Advocate, Scholar, and Professional. Despite the growing influence of AI in healthcare, there is a notable absence of AI-specific competencies for critically evaluating AI tools, interpreting AI-generated outputs, and safely and ethically integrating AI into clinical decision-making. To address these gaps, we propose the integration of AI-specific competencies into the CanMEDS framework. This integration should adopt a constructivist approach, leveraging active learning, case-based scenarios, simulations, and real-world experiences to prepare learners for the complexities of AI in clinical practice. These AI-specific competencies can be adapted for undergraduate medical education and tailored to align with the Royal College’s subspecialty groups, including imaging-based, internal medicine, surgery, pediatrics, critical care, obstetrics and gynecology, psychiatry, and other specialized areas. Central to this approach is the incorporation of feedback loops from both learners and instructors to ensure a sustained focus on patient-centered care. While concerns about cognitive load exist with the introduction of AI-specific competencies, AI’s generative capabilities can be harnessed for self-assessment and reflective practice, potentially mitigating this challenge. Through an exploration of global efforts to integrate AI into medical education, we identified gaps within the current CanMEDS framework and evaluated existing EPAs for Royal College subspecialties using Generative AI. Our findings highlight opportunities to embed AI competencies across training stages and milestones. Preliminary results suggest that the optimal strategy for integrating AI into the CanMEDS framework focuses on the core stage of resident training and the role of the Medical Expert. Rather than creating a new role centered on digital literacy and AI, we recommend augmenting the existing CanMEDS framework to incorporate these competencies. By leveraging the flexibility of the CanMEDS framework, we aim to establish AI-specific competencies that are measurable, progressive, and conducive to longitudinal learning and continuous feedback. This integration will prepare the next generation of healthcare providers to use AI safely and effectively in their practice while maintaining a patient-centered focus.
Jay Kalra, Bryan Johnston, Zoher Rafid-Hamed, Patrick Seitzinger
Open Access
Article
Conference Proceedings
AI-Assisted Creativity Support for Persona Creation Tools
Creativity is both a crucial and complex experiential process, valued not only for its outcomes but also for the ideation journey (O'Toole, 2024). With the increasing adoption of generative AI technologies, such as AI image generators, studies have shown that these tools can lead to fixation on initial examples, limiting idea diversity and originality (Wadinambiarachchi et al., 2024). To address this, designers must learn to identify and reflect on such fixation to enhance creative collaboration with AI. This study investigates how AI-assisted tools, through interface design and interactive features, can better support the creativity of design students in persona creation. The research employs an experimental design involving task tests with two types of AI-assisted persona creation tools. Evaluations using the Creativity Support Index (Cherry et al., 2014) and post-task interviews provide insights into user experience and creativity performance. Preliminary results indicate that both tools effectively support exploratory creativity, with participants viewing AI as a collaborative assistant rather than a dominant force. Additionally, the findings highlight distinct AI support needs across different stages of the creative process, such as inspiration generation in early ideation and refinement support in later stages. This study emphasizes the importance of designing AI tools that address diverse user needs and align with the phased nature of the creative process. The results contribute to design education by offering new perspectives on AI’s role in creativity and provide practical implications for developing AI-assisted tools that foster innovative workflows.
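To make the evaluation instrument concrete, what follows is a minimal sketch of how a Creativity Support Index score is typically computed from its six factors, per Cherry et al. (2014): each factor is rated by two agreement statements (factor score out of 20) and weighted by how often the factor was chosen across 15 paired comparisons. All ratings and counts below are hypothetical, and the exact item scales should be checked against the original instrument.

# Minimal sketch of CSI scoring (Cherry et al., 2014): six factors, each
# scored out of 20 from two agreement statements and weighted by a
# paired-comparison count (0-5; counts sum to 15 across factors).
# CSI = weighted sum / 3, on a 0-100 scale. All numbers are hypothetical.
factors = {
    # factor: (sum of two statement ratings, paired-comparison count)
    "Exploration": (16, 4),
    "Expressiveness": (14, 3),
    "Enjoyment": (15, 3),
    "Immersion": (12, 2),
    "ResultsWorthEffort": (13, 2),
    "Collaboration": (10, 1),
}

assert sum(count for _, count in factors.values()) == 15  # all 15 pairs used

csi = sum(score * count for score, count in factors.values()) / 3.0
print(f"CSI = {csi:.1f} / 100")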
Shih Ju Wang, Chien-Hsiung Chen
Open Access
Article
Conference Proceedings
Art and Emotion in the Age of AI: Understanding Human Engagement with AI-Generated and Traditional Art
AI-generated art has become a part of our daily lives; from website illustrations to art exhibitions, generative AI is increasingly influencing traditional art and human-made design. However, there is limited research exploring the impact of AI-generated art on human emotions and aesthetics. This study aims to analyze how people of different age groups perceive and engage with AI-generated art compared to traditional art, exploring the emotional connections they establish with each type of artwork and how these connections vary based on their backgrounds and experiences. In this study, the primary emotions defined by the Geneva Emotion Wheel are employed to analyze the emotional responses of respondents toward both AI-generated art and traditional art. The results indicate that most respondents favor traditional art and feel a wider emotional resonance toward it compared to AI-generated art. However, when respondents are presented with a choice between traditional art and AI-generated art without being informed of their origins, AI-generated artworks emerge as the top choice. These results suggest that further exploration of the emotional and aesthetic dimensions of AI-generated art is essential for understanding its potential future acceptance.
Anastasiia Fomina, Yan Luo
Open Access
Article
Conference Proceedings
Artificial Intelligence in Self-Service: Ushering a New Era of Customer Interaction
In an increasingly digital world, the integration of Artificial Intelligence (AI) into self-service solutions is becoming a critical success factor for organizations, especially companies offering services. This paper explores both the challenges and opportunities associated with using AI in self-service systems that support customer service employees. By automating routine inquiries, companies can increase efficiency as well as customer satisfaction through personalized and prompt responses. However, issues of data security and privacy need to be addressed. This paper studies the impact of AI-powered self-services on customer satisfaction and employee productivity in the service industry and provides practical insights into successful implementation strategies for self-services and AI. The paper demonstrates how companies can benefit from the synergy between human expertise and AI technology. The focus is on analyzing case studies that illustrate the transformative power of AI in customer service; these case studies reveal that successful implementation of AI self-services requires a prerequisite level of digitalization, appropriate employee skills, and an agile development mindset. Finally, future trends and developments that could shape the service industry are discussed. The study concludes that AI-powered self-service solutions can significantly enhance customer service operations when implemented strategically.
Jürgen Müller, Abdulrahman Abdulrazek
Open Access
Article
Conference Proceedings
Cloud Computing Innovations in the Financial Services Industry: Benefits, Challenges, and Opportunities
Cloud computing has become a catalyst for change in the financial services industry. To be competitive, organizations are realizing that cloud computing is becoming an essential tool in a constantly evolving digital environment. For companies in the financial sector looking to stay flexible and competitive in the face of global upheaval, cloud computing has become significant in this digital age (Carr, Pujazon, & Vazquez, 2018). With customers seeking more digital solutions, meeting these demands efficiently and effectively can require significant amounts of resources and time. Cloud computing can help organizations develop innovative solutions within their resource constraints, and do so in a more timely fashion. To understand cloud computing’s impact on financial services so it can be leveraged effectively, it is important to identify the benefits, challenges, and future opportunities. This study seeks to achieve the following. First, we strive to assess the impact of cloud computing on innovative processes within financial services organizations and highlight the benefits. Second, we intend to examine the challenges and obstacles that financial institutions encounter in implementing and using cloud solutions. After achieving these objectives, the study aims to offer a thorough grasp of how cloud computing is revolutionizing the financial services industry and to identify future opportunities. To accomplish these objectives, we will conduct a literature review and thematic analysis (Braun & Clarke, 2006). The results will identify the benefits, challenges, and future opportunities of cloud computing in the financial services industry. We will discuss cloud technologies’ ability to improve essential aspects such as consumer engagement, organizational agility, new service delivery, cost reduction, and operational efficiency. Other benefits to be addressed include cloud-based platforms supporting innovation by enabling financial product development cycles to be completed more quickly, successfully meeting new consumer demands. We will also examine challenges, such as the heightened risk of data leaks and cyberattacks associated with storing sensitive data in the cloud. Other challenges to be addressed include software identity management, perceived low security levels, incompatible infrastructure, and compliance issues. Future opportunities to be addressed will include effective technology management and rigorous security assessments to address the previously noted problems. Other opportunities to be highlighted will include implementing emerging technologies such as artificial intelligence for fraud detection. This paper intends to contribute recommendations for practitioners to capitalize on the benefits, address the challenges, and seize opportunities that exist or are coming into existence. We intend to summarize our findings into a framework that practitioners and researchers can utilize in their endeavors. Research topics will also be proposed, such as focusing on methods of removing the identified obstacles and investigating new strategic approaches.
References
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101.
Carr, B., Pujazon, D., & Vazquez, J. (2018). Cloud computing in the financial sector part 1: An essential enabler. Institute of International Finance.
Elizabeth Baidoo, Brenda Eschenbrenner
Open Access
Article
Conference Proceedings
Artificial Intelligence and Media Literacy: Navigating Information in a Digital World
Artificial intelligence (AI) is playing an increasingly important role in the media ecosystem, transforming both the creation and distribution of content. At the same time, the growing influence of AI raises questions about media literacy as a key aspect of critical thinking in the digital age. This article explores the relationship between AI and media literacy by analyzing how automated technologies shape information perception, fake news detection, and critical content evaluation skills. The article combines a theoretical review with empirical research to identify the main challenges and opportunities in the field. The focus is on the interaction between AI tools, such as content recommendation algorithms and generative models, and the ability of users to analyze, interpret, and create information. It analyzes the possibilities of using AI to improve media literacy: through interactive learning platforms, through capabilities for identifying prejudice and hate speech, and as a support tool in journalistic work for effective and rapid data analysis and fact-checking.
Lora Metanova, Neli Velinova
Open Access
Article
Conference Proceedings
Preliminary Survey on Trust Levels in AI-Clinical Decision Support Systems Among Medical Professionals
Artificial Intelligence-based Clinical Decision Support Systems (AI-CDSS) have the potential to enhance clinical decision-making. However, trust remains a critical challenge influencing their adoption, and the actual levels of trust among medical professionals remain unclear. This study aims to provide empirical evidence on current trust levels in AI-CDSS among medical professionals. A revised version of a questionnaire measuring trust in automation was utilized, employing a five-point Likert scale. A total of 29 Thai medical professionals, including both junior and senior practitioners, participated in this study. The findings reveal a spectrum of trust levels, with an average trust score of 3.05 (SD = 0.44). The majority of participants exhibited moderate trust; however, tendencies of undertrust and overtrust toward AI-CDSS were observed in 10.34% and 27.59% of participants, respectively. Concerns regarding the capability, reliability, and transparency of AI-CDSS were identified as key barriers to trust. These findings provide valuable insights into trust perceptions, contributing to the development of more trustworthy AI-CDSS solutions and informing strategies for their effective integration into clinical practice.
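As a hedged illustration of the descriptive statistics above, the sketch below summarizes five-point Likert trust scores and flags under- and overtrust; the participant scores and the 2.5/3.5 cutoffs are hypothetical, since the study's actual thresholds are not reported here.

# Illustrative sketch (not the study's code): mean and SD of per-participant
# trust scores on a 1-5 Likert scale, with hypothetical cutoffs for
# undertrust (< 2.5) and overtrust (> 3.5).
from statistics import mean, stdev

scores = [3.2, 2.4, 3.6, 3.0, 4.1, 2.9, 3.3]  # hypothetical participants

print(f"mean trust = {mean(scores):.2f} (SD = {stdev(scores):.2f})")
under = sum(s < 2.5 for s in scores) / len(scores)
over = sum(s > 3.5 for s in scores) / len(scores)
print(f"undertrust: {under:.2%} of participants")
print(f"overtrust:  {over:.2%} of participants")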
Yada Sriviboon, Arisara Jiamsanguanwong, Ornthicha Suphattanaporn
Open Access
Article
Conference Proceedings
A Tool to Complement Human Intelligence: The Math Behind Human Indispensability
Much ink has been spilled recently on the existential risks and potential of Artificial Intelligence. Between breathy utopian think-pieces and apocalyptic proclamations of the end of meaning in human life, an entire spectrum of outlooks muddies the waters on insight-driven and human-focused paths forward. While philosophical musings and abstract plans are prevalent, relatively little attention has been paid to underwriting integrative deployment as a problem that yields to analysis. The question 'when should an autonomous system step in?' is typically framed as demanding a comprehensive world-model of the human subject; oppositional defiance and counter-picking make this approach undesirable, turning the human and AI against one another. Instead, by combining operationalization from psychology, Pareto optimality from economics, norm-based stability from robust control, and shortest-path algorithms from graph theory, we are able to present mathematically robust conditions under which heterogeneous systems provide superior performance to unitary agents, guaranteeing a lower bound on the efficacy of joint human/AI teams endorsed by relative advantage. We also derive implicit conditions under which such relationships hold, finding them to be of geometrically increasing scope as task complexity increases. Finally, we demonstrate that these relations are not merely theoretical, using sample tasks with adversarial complexity to challenge the assignment paradigm, and find the results to remain within an order of magnitude of the predicted robustness condition.
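One illustrative reading of the Pareto-optimality ingredient (a sketch, not the authors' formalism): a joint human/AI team offers superior performance when its task-performance vector weakly Pareto-dominates that of each unitary agent, as checked below. All performance numbers are hypothetical.

# Sketch of a weak Pareto-dominance check for human/AI teaming (illustrative
# only; the paper's actual conditions combine several formal ingredients).
from typing import Sequence

def weakly_dominates(a: Sequence[float], b: Sequence[float]) -> bool:
    """True if vector a is >= b on every dimension and > b on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Hypothetical performance on (accuracy, speed, robustness to adversaries).
human = (0.90, 0.40, 0.80)
ai = (0.85, 0.95, 0.50)
team = (0.92, 0.96, 0.85)

for name, agent in (("human", human), ("AI", ai)):
    print(f"team weakly dominates {name}: {weakly_dominates(team, agent)}")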
Christopher Robinson, Joshua Lancaster
Open Access
Article
Conference Proceedings
Ethical Dilemmas and Regulations of Artificial Intelligence from the Perspective of Nietzsche’s Superman Philosophy Based on Alien: Romulus
The paper analyzes the ethical dilemmas and regulations of AI from five perspectives, grounded in Nietzsche's Übermensch philosophy. The aim is to provide theoretical support and multiple case studies for the establishment of an ethical order in AI. Firstly, it addresses kinship ethics dilemmas and regulations concerning AI within the context of dual ethics. This section examines the ethical challenges posed by AI through both human-centric and alien-centric lenses. It raises critical questions regarding whether kinship ethics should be predicated on a clear distinction between humans and animals. Secondly, it explores social ethics dilemmas and regulations related to AI against the backdrop of artificial ethics and professional standards. Ethical judgments alongside legal boundaries give rise to significant social ethical challenges associated with AI. The technology exacerbates social inequalities while undermining principles of equal opportunity. This segment poses essential inquiries about whether social ethics necessitate that AI assumes certain social responsibilities, as well as what specific forms these responsibilities should take. Should we redefine social ethics based on fairness when individual circumstances are similar, or justice when they differ? Thirdly, it delves into political ethics dilemmas and regulations pertaining to AI within contexts shaped by imagery suggestions and ethical reinforcements. Using Alien: Romulus as a case study, this section discusses how one might explore the moral imagination surrounding AI driven by its moral autonomy through visual representations. It raises pertinent questions about whether loyalty and integrity constitute political or ethical obligations that must be addressed by AI systems. Fourthly, national ethics dilemmas concerning regulation of AI are examined in light of digital transformation processes and advancements in technological security development. This part prompts reflection on whether a great power’s responsibilities represent a matter of national ethics for AI. Finally, the ethical dilemmas surrounding Earth and the regulation of AI within the framework of bioethics and ethical care are examined. This paper poses the question of whether concepts such as co-assistance, co-integration, co-sharing, and co-prosperity represent essential issues of Earth ethics that AI must address.
Yazhou Chen
Open Access
Article
Conference Proceedings
Perceptions and Usage of AI-based Technology among Preschool Children in Bulgaria
This study investigates the attitudes of parents and teachers of preschool children (aged 3–6 years) toward the emergence and use of AI-based technologies, with a focus on Bulgaria. Utilizing an online survey format, data were collected during October 2024 from parents and teachers (N=150), primarily residing in urban areas. The findings reveal that while AI-driven technologies such as smartphones, tablets, and smart TVs are integrated into children’s daily life, newer AI tools like virtual assistants and creative AI applications remain underutilized, especially in kindergartens. Teachers primarily use AI-related tools for educational purposes, such as e-blackboards and multimedia, but report limited training and information about emerging AI technologies. Parents were found to be more open to integrating AI-based tools at home, though primarily for practical applications relevant to daily activities. Both groups expressed dissatisfaction with the existing regulatory framework in the country, citing inadequacies in policies addressing the challenges of AI usage for vulnerable age groups. The study highlights the importance of a more inclusive approach to understanding AI exposure among children, as well as the need for targeted policy reforms and training programs. The findings contribute to ongoing discussions about integrating AI into early childhood education and provide actionable insights for educators, parents, and policymakers.
Lyubomir Kolarov
Open Access
Article
Conference Proceedings
AI vs. Authentic: Decoding Architectural Imagery
As AI becomes increasingly integrated into design processes, accurately distinguishing AI-generated architectural images from real photographs is crucial for effective communication and decision-making in the field. Aim: This study explored how experienced designers perceive and identify AI-generated images, focusing on the challenges they encounter and the visual cues they rely on to assess authenticity. Method: Employing a mixed-methods approach, five designers (1–20 years of experience) from a single firm participated in an hour-long focus group session on the Miro platform. They examined 16 images—8 AI-generated and 8 real—and were asked to identify AI-generated visuals. Annotations and discussions were thematically analyzed to capture participants’ decision-making processes and patterns of observation. Results: Overall, participants correctly classified 65% of exterior images and 70% of interior images. Analysis revealed five recurrent themes: subtle distortions in spatial elements, distorted or “demon-like” human features, warped backgrounds and inconsistent perspectives, over-perfection that lacked real-world imperfections, and reliance on professional domain knowledge. Night shots and images containing people presented consistent difficulties, while architectural expertise bolstered participants’ confidence in detecting anomalies. Limitations: Time constraints, limited zoom functionality on the Miro platform, and occasional confusion with voting mechanics potentially reduced thoroughness and accuracy. Environmental factors, including early finishers discussing progress, introduced additional distractions that may have biased responses. Conclusion: These findings highlight how architectural expertise, image content, and technological constraints shape the process of identifying AI-generated images. As part of a broader ongoing study that also includes participants without an architectural background, this research underscores the importance of examining how diverse user groups approach AI-generated visual content.
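As a toy illustration of the per-category accuracy figures above, the sketch below computes the share of correctly classified images per category; the judgment data are made up.

# Toy sketch with made-up labels: per-category accuracy of "AI vs. real"
# judgments, mirroring the exterior/interior breakdown reported above.
judgments = [
    # (category, image_is_ai, judged_as_ai)
    ("exterior", True, True), ("exterior", True, False),
    ("exterior", False, False), ("exterior", False, False),
    ("interior", True, True), ("interior", True, True),
    ("interior", False, False), ("interior", False, True),
]

for cat in ("exterior", "interior"):
    rows = [(truth, judged) for c, truth, judged in judgments if c == cat]
    acc = sum(truth == judged for truth, judged in rows) / len(rows)
    print(f"{cat}: {acc:.0%} correctly classified")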
Hamid Estejab, Sara Bayramzadeh
Open Access
Article
Conference Proceedings