Artificial Intelligence and Social Computing


Editors: Tareq Ahram, Jay Kalra, Waldemar Karwowski

Topics: Artificial Intelligence & Computing

Publication Date: 2022

ISBN: 978-1-958651-04-9

DOI: 10.54941/ahfe1001439

Articles

Won’t you see my neighbor? User predictions, mental models, and similarity-based explanations of AI classifiers

Humans should be able to work more effectively with artificial intelligence-based systems when they can predict likely failures and form useful mental models of how the systems work (Johnson et al. 2014, Klein et al. 2005, Bansal et al. 2019, Tomsett et al. 2020). We conducted a study of people’s mental models of artificial intelligence systems using a high-performing image classifier, focusing on participants’ ability to predict the classification result for a particular image. Participants viewed images in one of two classes and then predicted whether the classifier would label them correctly, indicating their confidence in their predictions. Participants also provided their own assessment of the correct class. In this experiment, we explored the effect of giving participants additional information: an array of the image’s nearest neighbors in a space representing the otherwise uninterpretable features extracted by the lower layers of the classifier’s neural network, using t-distributed stochastic neighbor embedding. We found that providing this neighborhood information did increase participants’ prediction performance, and that the performance improvement could be related to the neighbor images’ similarity to the target image. We also found indications that the presentation of this information may influence people’s own classification of the target image: in some cases, after viewing the image’s neighbors, participants’ accuracy in identifying the actual class of the image was significantly worse, particularly when the set of neighbor images included images from the incorrect class. They became “mechanomorphized” in their own judgments, rather than anthropomorphizing the classifier’s process.
There was also a significant relationship between reported confidence in predictions and accuracy: at a given level of confidence, participants in the control condition were significantly less accurate than experimental participants. In addition to the differences in mental models suggested by prediction accuracy and confidence, participants in the control and experimental conditions differed in how they described their mental models in comments on the image stimuli. Participants with less information tended to discuss image details, whereas those with more seemingly tried to find a pattern across the images and so focused more on the classifier itself and less on the image.
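The neighborhood displays described above amount to retrieving an image's nearest neighbors in the classifier's feature space. A minimal sketch of such retrieval (illustrative only; the study used t-SNE embeddings of network features, not this toy Euclidean search, and the vectors below are hypothetical):

```python
import math

def nearest_neighbors(query, corpus, k=3):
    """Return indices of the k corpus vectors closest to `query`
    by Euclidean distance in the embedding space."""
    dists = [(math.dist(query, v), i) for i, v in enumerate(corpus)]
    dists.sort()
    return [i for _, i in dists[:k]]

# Toy 2-D "embeddings" standing in for extracted network features.
corpus = [[0.0, 0.0], [1.0, 1.0], [0.1, 0.0], [5.0, 5.0]]
print(nearest_neighbors([0.0, 0.1], corpus, k=2))  # → [0, 2]
```

In the study, the images at these indices would be shown to the participant alongside the target image.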

Kimberly Glasgow, Jonathon Kopecky, John Gersh, Adam Crego
Open Access
Article
Conference Proceedings

Using Artificial Intelligence to Improve Human Performance: A Predictive Management Strategy

In this paper, we introduce the novel concept of predictive management, designed to support managers and their teams in achieving their long-term goals by adopting a new and sustainable AI- and human-based approach that aims to identify a team's mood during short, human-based control cycles. Predictive management helps managers, team leaders, and employees become more aware of the mood of a team and its members’ feelings by using AI, sentiment analysis, and emotion detection. This allows managers to identify issues and solve them together with the team during short control cycles and thus maintain a productive workflow, instead of being overwhelmed by those issues and risking a decline in corporate performance.

Fabrizio Palmas
Open Access
Article
Conference Proceedings

Robust AI for Accident Diagnosis of Nuclear Power Plants Using Meta-Learning

The application of artificial intelligence (AI) techniques is being considered for nuclear power plants (NPPs), one of the last industries to adopt the technology. Applications include accident diagnosis, automatic control, and decision support to reduce the operator’s burden. The most critical problem in these applications is the lack of actual plant data to train and validate the AI algorithms. It is very difficult to collect data from operating NPPs, and even more difficult to obtain data about accidents in NPPs because those situations are very rare. For this reason, most studies on AI applications to NPPs rely on simulators, software that mimics NPPs. However, it is highly uncertain that an AI algorithm trained using a simulator can still work well for the actual NPP. This study suggests a Robust AI algorithm for diagnosing accidents in NPPs. The Robust AI is trained on data collected in one environment (e.g., a simulator) and can work under a similar but not identical environment (e.g., the actual NPP). The Robust AI algorithm applies the Prototypical Network (PN), a kind of meta-learning that extracts major features from a few datasets and learns from these features. The PN learns a metric space in which classification can be performed by computing distances to prototype representations of each class. With the PN, the Robust AI algorithm extracts symptoms from the accident training data and uses these symptoms in training the diagnosis of accidents. The symptoms of accidents are almost identical between the simulator and the actual NPP, although the parametric values can differ. The suggested Robust AI algorithm is trained using one simulator and tested using another simulator of a different plant type, which is treated as the actual plant. The experimental results show that the Robust AI algorithm can properly diagnose accidents in different environments.
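The Prototypical Network classifies by distance to class prototypes, the mean embeddings of each class's support examples. A minimal sketch of that decision rule (an illustrative reconstruction, not the authors' implementation; the accident labels "LOCA" and "SGTR" and the embeddings are hypothetical):

```python
import math

def prototypes(support):
    """Mean embedding per class from a dict {label: [embedding vectors]}."""
    return {c: [sum(dim) / len(vs) for dim in zip(*vs)]
            for c, vs in support.items()}

def classify(x, protos):
    """Assign x to the class whose prototype is nearest (Euclidean)."""
    return min(protos, key=lambda c: math.dist(x, protos[c]))

# Few-shot support set: a handful of symptom embeddings per accident type.
support = {"LOCA": [[1.0, 0.0], [0.8, 0.2]],
           "SGTR": [[0.0, 1.0], [0.2, 0.8]]}
protos = prototypes(support)
print(classify([0.9, 0.1], protos))  # → LOCA
```

Because only the prototypes, not raw parametric values, drive the decision, a query from a slightly different environment can still land near the right class.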

Deail Lee, Heejae Lee, Jonghyun Kim
Open Access
Article
Conference Proceedings

Detection of inappropriate images on smartphones based on computer vision techniques

In recent years, the use of smartphones by children and adolescents has increased considerably, and therefore the dangers faced by this population are increasing. It is thus important to develop a technological solution that helps combat this problem by making use of computer vision. Through a bibliographic review, it was found that children and adolescents frequently view violent and pornographic images, which allowed us to build a dataset of this type of image to develop an artificial intelligence model. The model was successfully developed through training and validation phases using cloud computing resources (Google Colab), while for the testing phase it was deployed on an Android mobile device: screenshots were used to extract the images projected on the screen, and the results were then analyzed statistically using RStudio. The computational model detected, with a large percentage of true positives, images and videos of a pornographic and violent nature captured at the screen resolution of a smartphone while the user was using it normally.

Daisy Imbaquingo, Macarthur Ortega-Bustamante, José Jácome, Tatyana Saltos-Echeverria, Roger Vaca
Open Access
Article
Conference Proceedings

Econometric Modeling for the Management and Decomposition of Financial Risk

This research presents a methodological analysis that allows active management of the risk of financial assets, through a comprehensible study and combination of techniques used in the financial literature. In this way, the research delivers precise information on the risk-generating components of the assets studied. The methodology used corresponds to the wavelet decomposition method combined with the VaR methodology, which together prove to be an efficient way of controlling the financial risk of the investment portfolios studied, allowing identification of the main risk-generating components to which investors and fund managers are exposed.
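The combination of wavelet decomposition and VaR can be sketched in miniature: one level of a Haar wavelet transform splits a return series into smooth and detail components, and a historical-simulation VaR reads off a loss quantile. This is an illustrative toy, not the authors' econometric specification, and the return series is invented:

```python
def haar_step(series):
    """One level of the Haar wavelet transform: approximation (smooth)
    and detail coefficients. Series length must be even."""
    approx = [(series[i] + series[i + 1]) / 2 for i in range(0, len(series), 2)]
    detail = [(series[i] - series[i + 1]) / 2 for i in range(0, len(series), 2)]
    return approx, detail

def historical_var(returns, alpha=0.05):
    """Historical VaR: the loss at the alpha quantile of observed returns."""
    ordered = sorted(returns)
    idx = int(alpha * len(ordered))
    return -ordered[idx]

returns = [0.01, -0.02, 0.005, -0.015, 0.02, -0.03, 0.01, -0.005]
approx, detail = haar_step(returns)          # risk components by scale
print(historical_var(returns, alpha=0.25))   # → 0.015
```

Decomposing first and computing VaR per component is what lets the analysis attribute portfolio risk to particular frequency scales.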

Rolando Rubilar Torrealba, Karime Chahuán Jiménez, Hanns De La Fuente-Mella
Open Access
Article
Conference Proceedings

Artificial vision system to detect the mood of an Alzheimer's patient

Dementia is a brain disorder that affects older individuals’ ability to carry out their daily activities, as in the case of neurological diseases. The main objective of this study is to automatically classify the mood of an Alzheimer's patient into one of the following categories: wandering, nervous, depressed, disoriented, bored, or normal, from videos obtained in nursing homes for the elderly in the canton of Ambato, Ecuador. We worked with a population of 39 people of both sexes who were diagnosed with Alzheimer's and whose ages ranged between 75 and 89 years. The methods used are pose detection, feature extraction, and pose classification, achieved with neural networks, a gait classifier, and the Levenshtein distance metric. As a result, a sequence of moods is generated, which determines a relationship between the software’s output and the human expert’s assessment. It is concluded that artificial vision software allows us to recognize the mood states of Alzheimer’s patients during pose changes over time.
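The Levenshtein distance mentioned above can compare the software's mood sequence against the expert's: the fewer edits needed to turn one sequence into the other, the closer the agreement. A minimal sketch (illustrative; the mood labels are taken from the categories listed in the abstract, the sequences are invented):

```python
def levenshtein(a, b):
    """Edit distance between two sequences (insertions, deletions,
    substitutions), computed with a rolling DP row."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

software = ["normal", "bored", "wandering", "nervous"]
expert   = ["normal", "bored", "depressed", "nervous"]
print(levenshtein(software, expert))  # → 1 (one substituted label)
```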

David Ricardo Castillo Salazar, Laura Lanzarini, Héctor Gómez Alvarado, José Varela-Aldás
Open Access
Article
Conference Proceedings

Analysis of citizen's sentiment towards Philippine administration's intervention against COVID-19

The COVID-19 pandemic affected the world. The World Health Organization (WHO) issued guidelines the public must follow to prevent the spread of the disease. These include social distancing, the wearing of facemasks, and regular washing of hands. These guidelines served as the basis for formulating policies by countries affected by the pandemic. In the Philippines, the government implemented different initiatives, following the guidelines of the WHO, that aimed to mitigate the effect of the pandemic in the country. Some of the initiatives formulated by the administration include international and domestic travel restrictions, community quarantine, suspension of face-to-face classes and work arrangements, and phased reopening of the Philippine economy, to name a few. The initiatives implemented by the government during the surge of the COVID-19 disease resulted in varying reactions from citizens, who expressed them on social media platforms such as Twitter and Facebook. The reactions expressed on these platforms were used to analyze citizens’ sentiment towards the initiatives implemented by the government during the pandemic. In this study, a Bidirectional Recurrent Neural Network–Long Short-Term Memory–Support Vector Machine (BRNN-LSTM-SVM) hybrid sentiment classifier model was used to determine the sentiments of the Philippine public toward the initiatives of the Philippine government to mitigate the effects of the COVID-19 pandemic. The dataset used was collected and extracted from Facebook and Twitter using APIs and www.exportcomments.com from March 2020 to August 2020. 25% of the dataset was manually annotated by two human annotators. The manually annotated dataset was used to build the COVID-19 context-based sentiment lexicon, which was later used to determine the polarity of each document.
Since the dataset contained unstructured and noisy data, preprocessing activities such as conversion to lowercase characters, removal of stopwords, removal of usernames and pure-digit texts, and translation to the English language were performed. The preprocessed dataset was vectorized using GloVe word embeddings and was used to train and test the performance of the proposed model. The performance of the hybrid BRNN-LSTM-SVM model was compared to BRNN-LSTM and SVM in experiments using the preprocessed dataset. The results show that the hybrid BRNN-LSTM-SVM model, which gained 95% accuracy for the Facebook dataset and 93% accuracy for the Twitter dataset, outperformed the Support Vector Machine (SVM) sentiment model, whose accuracy ranges only from 89% to 91% across both datasets. The results indicate that the citizens harbor negative sentiments towards the initiatives of the government in mitigating the effect of the COVID-19 pandemic. The results of the study may be used in reviewing the initiatives imposed during the pandemic to determine the issues which concern the citizens. This may help policymakers formulate guidelines that address the problems encountered during a pandemic. Further studies may be conducted to analyze public sentiment regarding the implementation of limited face-to-face classes for tertiary education, looser restrictions, vaccination programs in the country, and other related initiatives that the government continues to implement during the COVID-19 pandemic.
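The preprocessing steps described (lowercasing, removing stopwords, usernames, and pure-digit tokens) can be sketched as a simple token filter. This is illustrative only, with a deliberately tiny stopword list and an invented example tweet, and it omits the translation step:

```python
import re

# Tiny illustrative stopword list; a real pipeline would use a full one.
STOPWORDS = {"the", "a", "an", "and", "to", "of", "in"}

def preprocess(text):
    """Lowercase, drop @usernames and pure-digit tokens, strip punctuation
    (keeping hashtags), and remove stopwords."""
    kept = []
    for tok in text.lower().split():
        if tok.startswith("@") or tok.isdigit():
            continue
        tok = re.sub(r"[^\w#]", "", tok)  # strip punctuation, keep hashtags
        if tok and tok not in STOPWORDS:
            kept.append(tok)
    return " ".join(kept)

print(preprocess("@DOH The quarantine starts 2020 in Manila!"))
# → quarantine starts manila
```

The cleaned tokens would then be mapped to GloVe vectors before training.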

Matthew John Sino Cruz, Marlene De Leon
Open Access
Article
Conference Proceedings

The Effect of Varying Levels of Automation during Initial Triage of Intrusion Detection

With unrestrained optimism about the possibilities of artificial intelligence (AI) outpacing its actual capabilities, AI developers are under increasing pressure to integrate AI into complex human decision-making tasks without fully understanding the implications of this automation. To investigate how automation may influence human performance in a high-workload environment, this study uses a triage scenario from intrusion detection with a simulated SNORT interface. Participants classify a series of time-sensitive alerts as real intrusions or false alarms with the assistance of varying levels of automation (LOA), from no automation to fully autonomous. Preliminary results showed that participants tend to prefer, and show some performance benefits with, intermediate levels of automation.

Daniel Cassenti, Aayushi Roy, Thom Hawkins, Robert Thomson
Open Access
Article
Conference Proceedings

Generating a Multimodal Dataset Using a Feature Extraction Toolkit for Wearable and Machine Learning: A pilot study

Studies of stress and student performance using multimodal sensor measurements have been a recent topic of discussion among engineering education researchers. With advances in computational hardware and the use of machine learning strategies, scholars can now deal with data of high dimensionality and provide a way to predict new estimates for future research designs. In this paper, the process to generate and obtain a multimodal dataset including physiological measurements (e.g., electrodermal activity, EDA) from wearable devices is presented. Through the use of a feature generation toolkit for wearable data, the time to extract, clean, and generate the data was reduced. A machine learning model from an openly available multimodal dataset was developed, and results were compared against previous studies to evaluate the utility of these approaches and tools. Keywords: Engineering Education, Physiological Sensing, Student Performance, Machine Learning, Multimodal, FLIRT, WESAD
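Feature extraction from a wearable EDA stream of the kind a toolkit like FLIRT automates can be illustrated with simple sliding-window statistics. This is a toy sketch, not the toolkit's API; the window parameters and the EDA samples are arbitrary:

```python
from statistics import mean, stdev

def window_features(signal, size, step):
    """Sliding-window mean and standard deviation over a 1-D signal,
    a simplified stand-in for wearable-toolkit feature generation."""
    feats = []
    for start in range(0, len(signal) - size + 1, step):
        w = signal[start:start + size]
        feats.append({"mean": mean(w), "sd": stdev(w)})
    return feats

eda = [0.2, 0.21, 0.25, 0.4, 0.38, 0.3, 0.28, 0.27]  # hypothetical EDA (μS)
print(window_features(eda, size=4, step=2))
```

Each window's feature dictionary would become one row of the multimodal dataset fed to the machine learning model.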

Edwin Marte Zorrilla, Idalis Villanueva, Jenefer Husman, Matthew Graham
Open Access
Article
Conference Proceedings

Hepatitis predictive analysis model through deep learning using neural networks based on patient history

One application of artificial intelligence is the prediction of diseases, including hepatitis. Hepatitis has been a recurring disease over the years, seriously affecting the population and accounting for 125,000 deaths per year. This process of inflammation and damage to the organ affects its performance, as well as the functioning of the other organs in the body. In this work, an analysis of variables and their influence on the target variable is made, and results from a predictive model are presented. We propose a predictive analysis model that incorporates artificial neural networks, and we have compared this prediction method with other classification-oriented models such as support vector machines (SVM) and genetic algorithms. We have framed our method as a classification problem. The method requires a prior process of data processing and exploratory analysis to identify the variables or factors that directly influence this type of disease. In this way, we can identify the variables that intervene in the development of this disease and that affect the liver or its correct functioning, causing discomfort to the human body as well as complications such as liver failure or liver cancer. Our model is structured in the following steps: first, data extraction is performed on records collected from the machine learning repository of the University of California at Irvine (UCI). These data then go through a variable transformation process. Subsequently, they are processed with learning and optimization through a neural network. The optimization (fine-tuning) is performed in three phases: hyperparameter optimization, neural network layer density optimization, and finally dropout regularization optimization. Finally, the visualization and analysis of results is carried out. We have used a dataset of patient medical records whose variables include age, sex, gender, hemoglobin, etc.
We have found factors related either directly or indirectly to the disease. The results of the model are presented according to the quality measures Recall, Precision, and MAE. This research leaves the door open to new challenges, such as new implementations within the field of medicine, not only focused on the liver but extending the approach to other applications and organs of the human body in order to avoid possible risks or future complications. It should be noted that applications of artificial neural networks are constantly evolving; improved models such as random forests and ensemble algorithms show great potential both in biomedical engineering and in areas focused on the analysis of different types of medical images.

Jorge Pizarro, Byron Vásquez, Willan Steven Mendieta Molina, Remigio Hurtado
Open Access
Article
Conference Proceedings

An analysis model for Machine Learning using Support Vector Machine for the prediction of Diabetic Retinopathy

Diabetic Retinopathy is a worldwide public health problem: around one percent of the population suffers from the disease, and another one percent suffers from it undiagnosed. It is estimated that within three years millions of people will suffer from this disease, increasing the rate of vascular, ophthalmological, and neurological complications, which will translate into premature deaths and a deterioration in patients’ quality of life. That is why we face a great challenge: predicting and detecting the signs of diabetic retinopathy at an early stage. For this reason, this paper presents a Machine Learning model focused on the optimization of a classification method using support vector machines for the early prediction of Diabetic Retinopathy. The optimization of the support vector machine consists of adjusting parameters such as the separation margin penalty between support vectors and the separation kernel, among others. The method has been trained on an image dataset called Messidor. Extraction and preprocessing of the data are carried out to perform a descriptive analysis and obtain the most relevant variables through supervised learning. In this sense, the most salient variables for the risk of diabetic retinopathy are type 1 diabetes and type 2 diabetes. For the evaluation of the proposed method we have used quality measures such as MAE, MSE, and RMSE, but the most important for classification problems are Accuracy, Precision, Recall, and F1. To show the efficacy and effectiveness of the proposed method, we have used a public database, which has allowed us to accurately predict the signs of diabetic retinopathy. Our method has been compared with other methods relevant to classification problems, such as neural networks and genetic algorithms.
The support vector machine has proven to be the best in terms of accuracy. In the state of the art, works related to Diabetic Retinopathy are presented, as well as notable works on Machine Learning and especially on Support Vector Machines. We describe the main parameters of the method and the general process of the algorithm, with a description of each step of the analysis model. We include the hyperparameter values used in the compared methods and present the parameter values that generated the best results. Finally, the most relevant results and the corresponding analysis are presented, showing the comparison among Neural Networks, SVM, and Genetic Algorithms. This study opens the way to future research related to diabetic retinopathy, with the aim of consolidating this information and seeking a better solution.
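One of the SVM parameters the paper tunes is the separation kernel. The Gaussian RBF kernel, a common choice, can be sketched directly; this is an illustrative computation of a kernel matrix, not the paper's tuned model, and the gamma value and sample points are arbitrary:

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian RBF kernel: exp(-gamma * ||x - y||^2).
    gamma controls how quickly similarity decays with distance."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

# Kernel (Gram) matrix over three toy feature vectors.
X = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]]
K = [[rbf_kernel(xi, xj) for xj in X] for xi in X]
print(K[0][0], round(K[0][1], 4))  # → 1.0 0.6065
```

An SVM trained with this kernel separates classes in the implicit feature space the kernel induces; the margin penalty C would be tuned alongside gamma.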

Remigio Hurtado, Janneth Matute, Juan Boni
Open Access
Article
Conference Proceedings

Supradyadic Trust in Artificial Intelligence

There is a considerable body of research on trust in Artificial Intelligence (AI). Trust has been viewed almost exclusively as a dyadic construct, where it is a function of various factors between the user and the agent, mediated by the context of the environment. A recent study has found several cases of supradyadic trust interactions, where a user’s trust in the AI is affected by how other people interact with the agent, above and beyond endorsements or reputation. An analysis of these supradyadic interactions is presented, along with a discussion of practical considerations for AI developers and an argument for more complex representations of trust in AI.

Stephen Dorton
Open Access
Article
Conference Proceedings

Artificial Intelligence in the aviation decision-making process: the transition from extended Minimum Crew Operations (eMCO) to Single Pilot Operations (SiPO)

Innovation, management of change, and the implementation of human factors in flight operations characterize the aviation industry. The International Air Transport Association (IATA) Technology Roadmap (IATA, 2019) and the European Aviation Safety Agency (EASA) Artificial Intelligence (A.I.) roadmap propose an outline and assessment of ongoing technology prospects, which change the aviation environment through the implementation of A.I. and the introduction of extended Minimum Crew Operations (eMCO) and Single Pilot Operations (SiPO). Changes in workload will affect human performance and the decision-making process. The research adopted the established definition of A.I. as “any technology that appears to emulate the performance of a human” (EASA, 2020). A review of the existing literature on Direct Voice Input (DVI) applications structured the A.I. aviation decision-making research themes in cockpit design and users’ perception and experience. Interviews with Subject Matter Experts (Human Factors analysts, A.I. analysts, airline managers, examiners, instructors, qualified pilots, pilots under training) and questionnaires disseminated to a group of professional pilots and pilots under training examined A.I. implementation in cockpit design and operations. Results were analyzed to evaluate the suitability of, and significant differences between, eMCO and SiPO from a decision-making perspective. Keywords: Artificial Intelligence (A.I.), Extended Minimum Crew Operations (eMCO), Single Pilot Operations (SiPO), cockpit design, ergonomics, decision making.

Dimitrios Ziakkas, Anastasios Plioutsias, Konstantinos Pechlivanis
Open Access
Article
Conference Proceedings

I Am What I Am – Roles for Artificial Intelligence from the Users’ Perspective

With increasing digitization, intelligent software systems are taking over more tasks in everyday human life, both in private and professional contexts. So-called artificial intelligence (AI) ranges from subtle and often unnoticed improvements in daily life, optimizations in data evaluation, and assistance systems with which people interact directly, to perhaps artificial anthropomorphic entities in the future. However, no etiquette yet exists for integrating AI into the human living environment, which has evolved over millennia for human interaction. This paper addresses what roles AI may take, what knowledge AI may have, and how this is influenced by user characteristics. The results show that roles with personal relationships, such as an AI as a friend or partner, are not preferred by users. The higher the confidence in an AI's handling of data, the more likely personal roles are seen as an option for the AI, while the preference for subordinate roles, such as an AI as a servant or a subject, depends on general technology acceptance and belief in a dangerous world. The role attribution is independent of the usage intention and the semantic perception of artificial intelligence, which differs only slightly, e.g., in terms of morality and controllability, from the perception of human intelligence.

Ralf Philipsen, Philipp Brauner, Hannah Biermann, Martina Ziefle
Open Access
Article
Conference Proceedings

Faulty Signal Restoration Algorithm in the Emergency Situation Using Deep Learning Methods

To operate nuclear power plants (NPPs) safely and efficiently, signals from sensors must be valid and accurate. Signals convey the current situation and status of the system to the operator or to systems that use them as inputs. Therefore, faulty signals may degrade the performance of both control systems and operators in emergency situations, as learned from past accidents at NPPs. Moreover, with the increasing interest in autonomous and automatic control, the integrity and reliability of input signals become important for successful control. This study proposes an algorithm for faulty signal restoration under emergency situations using deep convolutional generative adversarial networks (DCGAN), which generate new data from random noise using two networks (i.e., a generator and a discriminator). To restore faulty signals, the algorithm receives a faulty signal as input and generates a normal signal using a pre-trained normal signal distribution. This study also suggests optimization steps to improve the performance of the algorithm: 1) selection of optimal inputs, and 2) determination of the hyper-parameters for the DCGAN. The data for implementation and optimization were collected using a Compact Nuclear Simulator (CNS) developed by the Korea Atomic Energy Research Institute (KAERI). To reflect the characteristics of actual signals in NPPs, Gaussian noise with a 5% standard deviation was also added to the data.
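The noise-injection step used to make simulator data more realistic can be sketched directly. This assumes "5% standard deviation" means zero-mean Gaussian noise scaled to 5% of each sample's magnitude, which is one possible reading of the abstract; the signal values are hypothetical:

```python
import random

def add_noise(signal, std_frac=0.05, seed=42):
    """Add zero-mean Gaussian noise whose standard deviation is
    std_frac (here 5%) of each sample's magnitude, mimicking sensor noise."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [x + rng.gauss(0.0, std_frac * abs(x)) for x in signal]

clean = [100.0, 150.0, 155.0, 140.0]  # e.g. simulated pressure readings
noisy = add_noise(clean)
print(noisy)
```

The DCGAN would then be trained on such noisy-but-normal signals so that its learned distribution reflects real sensor behavior.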

Younhee Choi, Jonghyun Kim
Open Access
Article
Conference Proceedings

Toward Understanding the Use of Centralized Exchanges for Decentralized Cryptocurrency

Cryptocurrency has been extensively studied as a decentralized financial technology built on blockchain. However, there is a lack of understanding of user experience with cryptocurrency exchanges, the main means for novice users to interact with cryptocurrency. We conducted a qualitative study to provide a panoramic view of user experience and security perceptions of exchanges. All 15 Chinese participants mainly use centralized exchanges (CEXes) rather than decentralized exchanges (DEXes) to trade decentralized cryptocurrency, which is paradoxical. A closer examination reveals that CEXes provide better usability and charge lower transaction fees than DEXes. Country-specific security perceptions were observed. Though DEXes provide better anonymity and privacy protection and are free of governmental regulation, these are not necessary features for many participants. Based on the findings, we propose design implications to make cryptocurrency trading more decentralized.

Zhixuan Zhou, Bohui Shen
Open Access
Article
Conference Proceedings

Artificial intelligence in B2B sales: Impact on the sales process

Digitalization is a driving force for innovation in the business-to-business (B2B) environment and profoundly changes the way companies do business. It affects the entire value chain of a company and can be used for automating human tasks. For instance, previous research indicates that 40% of all sales tasks can be automated. Thus, the digital transformation in sales has the potential to improve a firm’s performance. Depending on its development level, digitalization in sales can assist or even replace numerous sales tasks. Therefore, using digital solutions in sales can be seen as an essential trigger of competitive advantage. Recent developments in research and practice have revealed that artificial intelligence (AI) in particular has gained increasing attention in the sales domain. A challenging issue in this domain is how AI affects the sales process and how it can be applied meaningfully in B2B sales. Thus, our paper aims to investigate how AI can be used along the sales process and how it can improve sales practices. To explore this, we conduct a systematic literature review in scientific databases such as Business Source Premier, Science Direct, Emerald, Springer Online Library, Wiley Online Library, and Google Scholar, supplementing the findings with a qualitative research approach. Analyzing this literature, focused on digital transformation in sales, we find that the application and benefits of AI depend on the sales process step. For this reason, we review B2B sales process models, compare them, and choose a reference model for the evaluation of AI in B2B sales. Moreover, we present common definitions of AI and show how this technology is usually applied in B2B sales. Afterward, we combine the sales process with use cases of AI. For each step, we present use cases in detail and explain their benefits for sales. For instance, we find that tasks with traditionally high human involvement are especially challenging to automate.
In particular, in complex sales situations the human salesperson may not be entirely replaced by digital technologies, while routine tasks can be carried out with their help. Summing up, our paper analyzes different viewpoints of the sales process in the digital sales literature and presents the application of AI along the sales cycle. It closes with a discussion and conclusion and gives recommendations for practice and academia.

Heiko Fischer, Sven Seidenstricker, Thomas Berger, Timo Holopainen
Open Access
Article
Conference Proceedings

Automatic Labeling of Human Actions by Skeleton Clustering and Fuzzy Similarity

Nowadays, human action recognition (HAR) has been applied in multiple fields with the rapid growth of artificial intelligence and machine learning. Applying HAR to industrial production lines can help visualize and analyze the correlation between human operators and machine utilization to improve overall productivity. However, training a HAR model requires manual labeling of certain actions in a large amount of collected video data, which is very costly. How to label a large amount of video automatically is an emerging practical problem in the HAR research domain. This research proposes an automatic labeling framework integrating Dynamic Time Warping (DTW), human skeleton clustering, and fuzzy similarity to assign labels based on pre-defined human actions. First, a skeleton estimation method such as OpenPose is used to detect the key points of the human operator’s skeleton. Then, the skeleton data are converted to spatial-temporal data for calculating the DTW distance between skeletons, and groups of human skeletons are clustered based on that distance. Within a group, undefined skeletons are compared with the pre-defined skeletons, which serve as references, and labels are assigned according to similarity against the references. The experimental dataset was created by simulating the human actions of manual drilling operations. Compared with manually labeled data, the results show that the accuracy, precision, recall, and F1 of the proposed labeling model can all reach up to 95%, with a 40% saving in labeling time.
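The DTW distance at the core of the clustering step can be sketched for one-dimensional sequences; the study computes it over spatial-temporal skeleton data, so this is a simplified illustration with invented sequences:

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D sequences:
    minimum cumulative |a_i - b_j| cost over all monotone alignments."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # a stretched
                                 D[i][j - 1],      # b stretched
                                 D[i - 1][j - 1])  # step together
    return D[n][m]

# Same motion performed at slightly different speeds: distance 0.
print(dtw([0, 1, 2, 1, 0], [0, 1, 1, 2, 1, 0]))  # → 0.0
```

Because DTW tolerates temporal stretching, two operators performing the same drilling action at different speeds can still cluster together.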

Chao-Lung Yang, Shang-Che Hsu, Si-Hao Wang, Jing-Feng Nian
Open Access
Article
Conference Proceedings

Detecting Potential Depressed Users in Twitter Using a Fine-tuned DistilBERT Model

With the spread of Major Depressive Disorder, otherwise known simply as depression, around the world, various efforts have been made to combat it and to reach out to those suffering from it. Part of those efforts includes the use of technology, such as machine learning models, to screen a person for depression through various means, including social media narratives such as tweets from Twitter. Hence, this study evaluates how well a pre-trained DistilBERT, a transformer model for natural language processing fine-tuned on a set of tweets from depressed and non-depressed users, can detect potential depression among Twitter users. Two models were built using the same procedure of preprocessing, splitting, tokenizing, training, fine-tuning, and optimizing. Both the Base Model (trained on the CLPsych 2015 dataset) and the Mixed Model (trained on the CLPsych 2015 dataset plus half of a dataset of scraped tweets) could detect potentially depressed users better than chance, demonstrating Area under the Receiver Operating Characteristic Curve (AUC) scores of 65% and 63%, respectively, when evaluated on the test dataset. The two models performed comparably in identifying potentially depressed users: there was no significant difference in their AUC scores when subjected to a z-test at the 0.05 level of significance (p = 0.21). These results suggest that DistilBERT, when fine-tuned, may be used to detect potential depression among Twitter users.
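The model comparison described above can be illustrated with a two-sided z-test for the difference between two independent AUCs. This is a generic sketch: the standard errors passed in below are placeholders, since the actual values depend on the paper's test-set size and class balance.

```python
import math

def auc_z_test(auc1, se1, auc2, se2):
    """Two-sided z-test for the difference of two independent AUCs.

    se1/se2 are the standard errors of each AUC estimate (e.g. from the
    Hanley-McNeil formula).  Returns (z statistic, two-sided p-value),
    using the standard normal CDF expressed via erf.
    """
    z = (auc1 - auc2) / math.sqrt(se1 ** 2 + se2 ** 2)
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p
```

With illustrative standard errors of 0.05, AUCs of 0.65 and 0.63 are nowhere near significantly different, consistent with the paper's conclusion that the two models perform comparably.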

Miguel Antonio Adarlo, Marlene De Leon
Open Access
Article
Conference Proceedings

The Songbird and the Robotic Self-Awakening

The songbird sings a beautiful melody when there is no ecological need, fueling imagination and curiosity for investigation with biological models of the cognitive mechanisms of animal communication. Many animal sensory signals remain a mystery to the logical reasoning of science. Through evolutionary game theory in ecological cognitive science, predictions are made regarding signal cost, circumstances, and the individual agent’s state, about which signals (continuous or discrete) should be valued in certain circumstances, but not about the details of signal design, nor any clue as to why signals are so diverse in form. In this, investigations have the what and the when/where, but not the why. This is reflective of where the debate on robotic consciousness sits. A robot can be programmed to decide to carry out an action in an “if-then” case and use logical algorithms to ensure the calculations match the possibilities of situations, but to act randomly as an expression of feelings, emotions, or passions, or just for the sake of the act, is beyond a calculation. It is the “why” of an existent consciousness, the “just because” reasoning for the feeling, thought, emotion, passion, or compassion that occurred for the act to come to fruition. A sentient act from emotion or passion may not be a programmable option, as it comes from the identity and free will of the conscious self. The question discussed in this paper is whether robots could someday possess a level of consciousness and sentience to match that of a living human being. This paper investigates the position that robots will reach a level of sentience and consciousness through the intelligent learning systems of AI. There is strong support for the position that there is a way for electronic networks to become more like human neural networks.
As nanotechnology and biotechnology grow, the understanding of human physiology will deepen, down to the smallest details of neurons and networks and into the compatibility of neural with electronic systems. AI systems have already begun to find support through integration with biotechnology and nanotechnology (West, 2000).

Valarie Yerdon
Open Access
Article
Conference Proceedings

Development of a platform based on artificial vision with SVM and KNN algorithms for the identification and classification of ceramic tiles

In the ceramic tile manufacturing industry, the quality of production achieved depends to a large extent on the quality of each tile, which is very important for its classification and price. Currently, this process is performed by human operators, but many manufacturers aim to improve performance and production by automating it. In this work, we present the development of an artificial-vision platform that identifies defects in ceramic tiles so that they can be classified according to their quality. The algorithms chosen for the platform are Support Vector Machine (SVM) and K-Nearest Neighbors (KNN). To apply these algorithms, the images are preprocessed, the descriptors for defect detection are extracted, and then the algorithms are run and the results evaluated.
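The KNN half of this pipeline can be sketched in a few lines. This is a minimal self-contained majority-vote classifier over descriptor vectors; the descriptors themselves (e.g. texture or edge statistics extracted from tile images) and the labels are illustrative assumptions, and the paper's SVM branch is not shown.

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Label a descriptor vector by majority vote of its k nearest
    training examples under Euclidean distance.

    `train` is a list of (feature_vector, label) pairs, e.g. descriptors
    extracted from tile images labeled 'good' or 'defect'.
    """
    dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    neighbors = sorted(train, key=lambda tv: dist(tv[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]
```

In a production system the hand-rolled loop would normally be replaced by a library implementation (e.g. scikit-learn's `KNeighborsClassifier`), but the decision rule is the same.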

Edisson Pugo-Mendez, Luis Serpa-Andrade
Open Access
Article
Conference Proceedings

Synchronization procedure for data collection in offline-online sessions

This article proposes a system for data replication and synchronization on mobile devices that is managed offline, allowing data collection in remote locations or places deprived of an internet connection. Because this process suffered shortcomings in the convergence and stability of the data, a synchronization procedure based on web services is used to assist it. The result is successful synchronization between a database hosted in the cloud and a database hosted locally on the mobile device, with compatibility between different programming languages: Python’s Django on the server side, deploying the web services, and C# on the client side, consuming the synchronization services. Synchronization is carried out without losing data integrity, enabling devices to operate in offline mode and perform the corresponding activities, then upload the data and keep it synchronized once an internet connection is available.
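The offline-first pattern described above can be sketched as a local queue that is flushed to the remote store when connectivity returns. This is a minimal illustration, not the paper's Django/C# implementation; `push_remote` stands in for the web-service call.

```python
class OfflineStore:
    """Minimal sketch of offline-first data collection.

    Records are appended to a local (on-device) queue unconditionally;
    sync() drains the queue to the remote store only when online, so no
    record is lost while disconnected.
    """
    def __init__(self, push_remote):
        self.pending = []            # local queue, survives offline periods
        self.push_remote = push_remote  # stand-in for the web-service call

    def record(self, item):
        self.pending.append(item)    # always succeeds, even offline

    def sync(self, online):
        if not online:
            return 0                 # keep data locally until connected
        sent = 0
        while self.pending:
            self.push_remote(self.pending.pop(0))  # FIFO preserves order
            sent += 1
        return sent
```

A real implementation would add acknowledgements and retries so that a record is removed from the local queue only after the server confirms receipt; that is the part that protects data integrity.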

Andres Viscaino-Quito, Luis Serpa-Andrade
Open Access
Article
Conference Proceedings

Proposal for the Generation of Profiles using a Synthetic Database

A lack of data hampers the various models that feed an artificial intelligence intended to discover patterns of behavior in a data set. Because of this lack, systems are not nourished with data sets large enough to fulfill their learning function. We therefore present a synthetic database parameterized with restrictions on the characteristics of graphomotor and language elements, which yields a set of combinations that serves as the model for the AI. This produced a measurable total of 777,600 combinations; applying a first filter with the respective restrictions left 77,304 valid combinations, and a second filter with the remaining restrictions left 57,672 valid combinations for generating the synthetic database that will feed the AI. We conclude that generating synthetic data makes it possible to create data more or less similar to real data, as required, thereby ensuring sufficient quantity and no dependence on real or original data.
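The enumerate-then-filter process described above can be sketched with a Cartesian product over attribute ranges. The ranges below are purely illustrative assumptions, chosen only so that the unfiltered space matches the 777,600 combinations cited (6^5 × 100); the paper's actual graphomotor and language attributes and restrictions are not specified in the abstract.

```python
from itertools import product

# Hypothetical parameterization: each synthetic profile is a tuple of
# categorical attributes.  Five 6-valued attributes and one 100-valued
# attribute give 6**5 * 100 = 777,600 raw combinations.
ranges = [range(6)] * 5 + [range(100)]

def generate(valid):
    """Enumerate every combination and keep those passing the filter.

    `valid` encodes one stage of restrictions; filters can be chained by
    calling generate again with the next stage's predicate.
    """
    return [combo for combo in product(*ranges) if valid(combo)]

total = sum(1 for _ in product(*ranges))  # size of the unfiltered space
```

Chaining two `valid` predicates in sequence mirrors the paper's two filtering stages (777,600 → 77,304 → 57,672), though the concrete restrictions would differ.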

Andres Viscaino-Quito, Luis Serpa-Andrade
Open Access
Article
Conference Proceedings

Improving Common Ground in Human-Machine Teaming: Dimensions, Gaps, and Priorities

“Common ground” is the knowledge, facts, beliefs, etc. that are shared between participants in some joint activity. Much of human conversation concerns “grounding,” or ensuring that some assertion is actually shared between participants. Even in highly trained tasks, such as teammates executing a military mission, each participant devotes attention to contributing new assertions, making adjustments based on the statements of others, offering repairs to resolve potential discrepancies in the common ground, and so forth. In conversational interactions between humans and machines (or “agents”), this activity of building and maintaining common ground is typically one-sided and fixed. It is one-sided because the human must do almost all the work of creating substantive common ground in the interaction. It is fixed because the agent does not adapt its understanding to what the human knows, prefers, and expects; instead, the human must adapt to the agent. These limitations create burdensome cognitive demand, cause frustration and distrust in automation, and make the notion of an agent “teammate” seem an ambition far from reachable in today’s state of the art. We seek to enable agents to partner more fully in building and maintaining common ground, as well as to adapt their understanding of a joint activity. While “common ground” is often called out as a gap in human-machine teaming, there is no extant, detailed analysis of the components of common ground and a mapping of these components to specific classes of functions (what specific agent capabilities are required to achieve common ground?) and deficits (what kinds of errors may arise when the functions are insufficient for a particular component of the common ground?).
In this paper, we provide such an analysis, focusing on the requirements for human-machine teaming in a military context where interactions are task-oriented and generally well-trained. Drawing on the literature of human communication, we identify the components of information included in common ground. We identify three main axes: the temporal dimension of common ground, and personal and communal common ground. The analysis further subdivides these distinctions, differentiating between aspects of the common ground such as personal history between participants, norms and the expectations based on those norms, and the extent to which actions taken by participants in a human-machine interaction are “public” events. Within each dimension, we also provide examples of specific issues that may arise from a lack of common ground along that dimension. The analysis thus defines, at a more granular level than existing analyses, how specific categories of deficits in shared knowledge or differences in processing manifest as misalignment in shared understanding. The paper both identifies specific challenges and prioritizes them according to acuteness of need: not all of the gaps require immediate attention to improve human-machine interaction, and the solution to specific issues may sometimes depend on solutions to other issues. As a consequence, this analysis facilitates greater understanding of how to attack misalignment issues in both the nearer and longer terms.

Robert Wray, James Kirk, Jeremiah Folsom-Kovarik
Open Access
Article
Conference Proceedings

Development of a virtual assistant chatbot based on Artificial Intelligence to control and supervise a process of 4 tanks which are interconnected

This article presents a survey of work on the use of virtual assistants in Industry 4.0, carried out to establish the parameters and essential characteristics for creating a ‘chatbot’ virtual assistant. The device should be applicable to a process of four interconnected tanks under a robust multivariable PID control, with the aim of controlling and supervising this process from a smartphone through a mobile messaging application. Keywords sent in text messages are interpreted by the chatbot, which acts depending on the message it receives: either a query about the status of the process and the tanks, answered with a text message containing the required information, or a command that starts or stops the process. This system is proposed as a solution for long-distance supervision and control of different processes. With it, an option to optimize the execution of actions in terms of security, speed, data reliability, and resource maximization can be implemented, leading to better overall performance in industry.
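The keyword-dispatch behavior described above can be sketched as follows. The keywords, reply strings, and the `DemoProcess` interface are all illustrative assumptions, not taken from the paper; a real deployment would sit behind a messaging-platform API and talk to the PID-controlled tank process.

```python
class DemoProcess:
    """Stand-in for the four-tank process interface (illustrative only)."""
    def __init__(self):
        self.running = False
        self.levels = {1: 40, 2: 55, 3: 30, 4: 62}  # liters, made up

    def start(self):
        self.running = True

    def stop(self):
        self.running = False

    def status(self):
        return self.levels

def handle_message(text, process):
    """Interpret a text message by keyword and act on the process."""
    words = text.lower().split()
    if "status" in words:
        levels = process.status()
        return "; ".join(f"tank {t}: {v} L" for t, v in levels.items())
    if "start" in words:
        process.start()
        return "process started"
    if "stop" in words:
        process.stop()
        return "process stopped"
    return "unrecognized command"
```

Keyword matching keeps the chatbot simple and predictable, which matters when messages trigger physical actions; checking "status" before "start"/"stop" ensures a query never accidentally actuates the process.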

Sandro Gonzalez-Gonzalez, Luis Serpa-Andrade
Open Access
Article
Conference Proceedings

Pattern noise prediction using Artificial Neural Network

In the early design stage of a tire pattern, it is very useful to predict the noise level associated with the pattern. An artificial neural network (ANN) was recently used to develop a model for predicting tire pattern noise. That ANN used a supervised training method in which features are extracted by applying Gaussian curve fitting to the tread profile spectrum of the tire pattern and used as the network input. This method requires laser scanning of the pattern of a real tire, but in early design there is no real tire. In this study, a convolutional neural network (CNN) to predict tire pattern noise was developed based on a non-supervised training method, taking a pattern image of the tire to be designed as its input. Two learning algorithms, stochastic gradient descent (SGD) and RMSProp, were studied in the CNN model to compare their learning performance, and RMSProp was selected. The CNN’s utility in the early design stage of a tire is discussed. In the study, pattern noise for 28 tires was measured in an anechoic chamber and their pattern images were scanned. Pattern noise and pattern images for 24 tires were used to train the ANN and CNN, and each trained network was validated on the 4 tires not used for training. Both networks were successfully developed and validated for the prediction of tire pattern noise. The trained CNN can be used to predict pattern noise for a tire to be designed in the early design stage using only its drawing image, whereas the ANN can be used to predict pattern noise for a real tire in the development stage.
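The difference between the two optimizers compared in the study comes down to their parameter-update rules, which can be sketched directly. This is the generic textbook form of each rule, not the paper's training setup; the learning rates and decay constant below are common defaults, not values from the paper.

```python
import numpy as np

def sgd_step(w, grad, lr=0.01):
    """Plain stochastic gradient descent: step proportional to the gradient."""
    return w - lr * grad

def rmsprop_step(w, grad, cache, lr=0.01, decay=0.9, eps=1e-8):
    """RMSProp: divide each step by a running RMS of recent gradients.

    `cache` is the exponential moving average of squared gradients; it
    equalizes step sizes across parameters with very different gradient
    magnitudes, which often speeds up CNN training.
    """
    cache = decay * cache + (1 - decay) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache
```

On a toy quadratic objective f(w) = w², whose gradient is 2w, both rules drive w toward the minimum; RMSProp's normalized steps are what made it the better-performing choice in the paper's CNN.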

Sang Kwon Lee
Open Access
Article
Conference Proceedings

Three-degree graph and design of an optimal routing algorithm

The importance of learning applications, and of the high-performance computers that run them, is growing significantly. In parallel computing, we denote the interconnection between a single memory and multiple processors as a multi-processor; similarly, multi-computing signifies the connection of memory-equipped processors through communication links. The performance of multi-computing is closely tied to the processors’ linkage structure. We call the connection structure of the processors an interconnection network. An interconnection network can be modeled as a classical graph consisting of nodes and edges: a multi-computing processor is expressed as a node and a communication link as an edge. Categorizing proposed interconnection networks by their number of nodes, they can be classified as follows: mesh class types with n×k nodes (Torus, Toroidal mesh, Diagonal mesh, Honeycomb mesh), hypercube class types with 2^n nodes (Hypercube, Folded hypercube, Twisted cube, de Bruijn), and star graph class types with n! nodes (Star graph, Bubblesort star, Alternating group, Macro-star, Transposition). The mesh type structure is a planar graph widely utilized in domains such as VLSI circuit design and base station placement (covering) problems in mobile communication networks. Mesh class types are comparatively easy to design and can be implemented practically in algorithmic domains, so they are a classical choice when designing a parallel computing network system. This study proposes De3, a novel mesh structure of degree three, and designs an optimal routing algorithm as well as a parallel path algorithm based on analysis of its diameter. The address of a node in the De3 graph is expressed with n-bit binary digits, and an edge is denoted with the operator %.
We build an interval function that computes the locational property of the corresponding nodes to derive an optimal routing path from node u to node v in the De3 graph. We present the optimal routing algorithm based on the interval function, calculating and validating the diameter of the De3 graph. Furthermore, we propose an algorithm that establishes node-disjoint parallel paths, i.e., non-overlapping paths from node u to node v. The outcome of this study is a novel interconnection network structure applicable to routing algorithm optimization, limiting the communication links per node to three. These results suggest viable operation of high-performance edge computing systems in a cost-efficient and effective manner.
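Since the abstract does not fully specify the De3 structure or its interval function, the general routing problem it addresses can be illustrated with a generic breadth-first search, which finds a fewest-hop route in any unweighted interconnection network. The example graph in the usage note is the 3-dimensional hypercube Q3 (every node has degree three, like De3), used here purely as an illustrative stand-in.

```python
from collections import deque

def shortest_route(adj, u, v):
    """Fewest-hop route from u to v in an unweighted network.

    `adj` maps each node to its list of neighbors.  BFS explores nodes
    in order of hop count, so the first time v is reached the recorded
    predecessor chain is an optimal route.  Returns None if unreachable.
    """
    prev = {u: None}           # also serves as the visited set
    queue = deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            path = []
            while x is not None:
                path.append(x)
                x = prev[x]
            return path[::-1]  # reverse predecessor chain into u -> v order
        for y in adj[x]:
            if y not in prev:
                prev[y] = x
                queue.append(y)
    return None
```

A dedicated routing algorithm like the paper's interval-function approach computes the next hop from the node addresses alone, in O(1) per hop and without the global adjacency knowledge BFS requires; that is precisely its advantage over generic search.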

Bo-Ok Seong, Jimin Ahn, Myeongjun Son, Hyeongok Lee
Open Access
Article
Conference Proceedings