Human Factors in Software and Systems Engineering


Editors: Tareq Ahram

Topics: Systems Engineering

Publication Date: 2023

ISBN: 978-1-958651-70-4

DOI: 10.54941/ahfe1003763

Articles

Explaining algorithmic decisions: design guidelines for explanations in User Interfaces

Artificial Intelligence (AI)-based decision support is playing a growing role in manufacturing and logistics. Users of AI-based systems expect to understand the decisions these systems make. In addition, users such as workers and managers, but also works councils in companies, demand transparency in the use of AI. Against this background, AI research faces the challenge of making the decisions of algorithmic systems explainable. Algorithms, especially in the field of AI but also classical ones, do not provide an explanation for their decisions. To generate such explanations, new algorithms have been designed that explain the decisions of other algorithms post hoc. This subfield is called explainable artificial intelligence (XAI). Methods such as local interpretable model-agnostic explanations (LIME), Shapley additive explanations (SHAP) or layer-wise relevance propagation (LRP) can be applied. LIME is an algorithm that can explain the predictions of any classifier by learning an interpretable model locally around the prediction. In image recognition, for example, a LIME algorithm can highlight the image areas on which the algorithm based its decision; it can even reveal that an algorithm arrived at its result based on the image caption rather than the image itself. SHAP, a game-theoretic approach that can be applied to the output of any machine learning model, connects optimal credit allocation with local explanations, using Shapley values from game theory for the allocation. In XAI research, explanatory user interfaces and user interactions have hardly been studied. One of the most crucial factors in making a model understandable through explanations is the involvement of users in XAI; human-computer interaction skills are needed in addition to technical expertise. According to Miller and Molnar, good explanations should be designed contrastively, explaining why event A happened instead of another event B rather than just emphasizing why event A occurred. It is also important that explanations are limited to only one or two causes and are thus formulated selectively. In the literature, four guidelines for explanations are formulated: use natural language, use various methods to explain, adapt to the mental models of users, and be responsive so that a user can ask follow-up questions. Existing explanations are often very mathematical, and deep knowledge of details is needed to understand them. In this paper, we present design guidelines that help make explanations of algorithms understandable and user-friendly. We use the example of AI-based algorithmic scheduling in logistics and show the importance of a comprehensive user interface in explaining decisions. For our use case, AI-based shift scheduling in logistics, where workers are assigned to workplaces based on their preferences, we designed a user interface to support transparency as well as explainability of the underlying algorithm and then evaluated it with various users and two different user interfaces. We show excerpts from the user interface and our explanations for the users, and we give recommendations for the creation of explanations in user interfaces.
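
To make the abstract's references to post hoc explanation methods concrete, here is a minimal, hypothetical Python sketch (not code from the paper) that uses the lime package to explain a single prediction of a generic classifier; the synthetic data, model choice, feature names, and class labels are all assumptions.

```python
# Minimal sketch, not the authors' implementation: explaining one prediction
# of a black-box classifier with LIME. Data and feature names are synthetic.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic tabular data standing in for, e.g., shift-scheduling features.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=[f"feature_{i}" for i in range(6)],
    class_names=["rejected", "assigned"],   # hypothetical labels
    mode="classification",
)

# Fit a local interpretable model around one instance and list the feature
# weights that a user-facing explanation could be built from.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```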

Charlotte Haid, Alicia Lang, Johannes Fottner
Open Access
Article
Conference Proceedings

Value-driven architecture enabling new interaction models in Society 5.0

Industries are struggling to deliver the information and insights required for top performance. They also need to invest in developing new knowledge to create the foundation of trusted data necessary for the cognitive business. However, more change is coming: as we envision Society 5.0, new interaction models will be generated, enabling a move to connected industry ecosystems supported by a value-driven architecture akin to the next generation of a society-centric internet. Society 5.0 trends will cause a shift from the output-based business model focused on buy/sell/own for-profit interaction to an impact-based model. The new model will be a personalized and purpose-led service involving ecosystem participants from multiple industries and will drive higher incomes for participants and additional business while decreasing the cost of acquiring customers. The trust and human centricity of that model will lead to advancements in:
· Ethics, Impact & Purpose - open, trusted, peer-endorsed services and products.
· Decentralization of Power - more loosely coupled ecosystems where ecosystem leaders release more power to participants to fuel the "network" effect.
· Data Democratization - bring your data; data used for social and sustainable innovation.
· Connected Cyber/Physical Society - the instrumentation of the physical world with IoT and Edge Computing.
· New data sources and standards will combine existing data sets with new ones to set the foundation for contextual computing and highly adaptive cyber-physical systems for many industries.
· Resiliency by design - a guiding design principle that is not only a technology requirement but also a business imperative and will create opportunities for new entities such as "Group Formed Networks" based on shared interests.
In this paper, we focus on a solution tackling Society 5.0 problems based on a globally scalable platform that is trusted to preserve individuals' and businesses' privacy and confidentiality while using the data to create value alongside social and individual good, simply providing value while maintaining values. We describe an innovative architecture at both a societal and a technical level, resting on a logical framework in which several technology components interact to provide value to society. We address the industry dynamics of Society 5.0, tracking new business trends and drivers influencing social infrastructure, societal engagement, cohesion, and new value creation. We define the building blocks required to support a Society 5.0 ecosystem solution in the future, in alignment with new business models to promote economic development and solve social issues.

Elizabeth Koumpan, Anna Topol
Open Access
Article
Conference Proceedings

The Removal of Irrelevant Human Factors in a Multi-Review Corpus through Text Filtering

Generating a high-quality explainable summary of a multi-review corpus can help people save time in reading the reviews. With natural language processing and text clustering, people can generate both abstractive and extractive summaries on a corpus containing up to 967 product reviews (Moody et al. 2022). However, the overall quality of the summaries needs further improvement. Noticing that online reviews in the corpus come from a diverse population, we take the approach of removing irrelevant human factors through pre-processing. Applying available pre-trained models together with reference-based and reference-free metrics, we filter out noise in each review automatically prior to summary generation. Our computational experiments show that one may significantly improve the overall quality of an explainable summary generated from such a pre-processed corpus compared with the original one. We suggest applying available high-quality pre-trained tools to filter noise rather than starting from scratch. Although this work is on a specific multi-review corpus, the methods and conclusions should be helpful for generating summaries for other multi-review corpora.
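
As a rough illustration of the pre-processing idea, assuming an embedding-based relevance filter rather than the paper's exact models and metrics, the sketch below drops review sentences whose sentence-transformers embedding lies far from the corpus centroid before any summary is generated; the sentences and threshold are made up.

```python
# Hedged sketch: filter review sentences that look irrelevant to the corpus
# before summarization, using a pre-trained sentence encoder.
from sentence_transformers import SentenceTransformer, util

# Hypothetical review sentences; the paper's corpus and metrics differ.
sentences = [
    "The blender crushes ice quickly and quietly.",
    "Shipping took two weeks, which was annoying.",
    "My cousin's wedding was lovely last summer.",   # off-topic sentence
    "The lid seals well and nothing leaks.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")      # pre-trained encoder
emb = model.encode(sentences, convert_to_tensor=True)
centroid = emb.mean(dim=0, keepdim=True)
scores = util.cos_sim(emb, centroid).squeeze(1)      # similarity to corpus centroid

THRESHOLD = 0.2                                      # tuning knob, assumed
kept = [s for s, sc in zip(sentences, scores) if sc.item() >= THRESHOLD]
print(kept)
```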

Aaron Moody, Makenzie Spurling, Chenyi Hu
Open Access
Article
Conference Proceedings

Accounting trustworthiness requirements in Service Systems Engineering

Trustworthy services are essential for sustainable digitization, especially in ever more demanding times when it comes to service expectations, quality and consumption. Putting the human being behind the service consumer at the centre of service systems engineering also means identifying which requirements are beneficial for the actual usage of the service. Previous research has shown that the degree of trustworthiness of a digital system is directly related to its potential use. Yet there is a lack of empirical data on the perceived trustworthiness of different systems and of approaches for integrating these aspects as requirements in disciplines such as Service Engineering in academia. This paper provides insights into a recent empirical study that collected views on the importance of trustworthiness attributes of various digital services and shows how these results can be integrated into Service Systems Engineering via a Requirements Engineering approach. The basis is a multidisciplinary view of trustworthiness in sociotechnical systems. In this expert survey, the perspectives are represented by trusted Wifi, conventional WebAPIs, AI WebAPIs, and mediation services. Differences between social, technical, socio-technical, and AI-enabled services are thus also highlighted. Utilizing the basic approaches of trust engineering, trust determinants are incorporated and empirically evaluated in this study, giving new insights into trustworthy systems engineering as well as relevant and potentially counter-intuitive features.

Steven Schmidt, Sandro Hartenstein
Open Access
Article
Conference Proceedings

Analysis of the behavior of the floating systems used for boundary of river-sea recreational activities area

The network of river courses that cross the territory of Romania has a total length of 118,000 km, to which is added the aquatic part of the Romanian coast (the Black Sea and the harbors), with an area of 39,940 km2. Within this territory, the UNESCO Biosphere Reserve (the shore of the Danube Delta and the Razim-Sinoe complex) covers an area of over 6,000 ha. Under these conditions, tourism that includes recreational activities (if the environment allows it) can be extended within the arranged natural aquatic reserves, mainly including swimming. The natural areas for swimming are protected areas, specially arranged to avoid possible risks of pollution: for depths up to 1.50 m, the slope is uniform and its inclination respects a ratio of 1:10 to 1:15, while for greater depths the inclination of the slope does not exceed a ratio of 1:3. Considering the strict restrictions imposed on the recreation area, the swimming areas must be strictly and visibly delimited from the areas where other recreational activities with possible health risks are carried out (mooring of charter ships, practicing water sports, etc.). Additionally, special attention is paid to the delimitation of the bathing area with a depth of less than 0.70 m, intended for children and people who do not swim for various reasons. In these situations, to avoid possible accidents, the delimitation and marking is performed with floating systems made of a composite material based on a woven-structure matrix. To ensure the delimitation and signalling of maritime and fluvial areas with a depth of 3 m, the research focused on the digital development of a flexible composite structure. The geometric, dimensional and structural elements of the composite architecture were predicted based on FEM modeling and calculated for a solid body in the form of a right circular cylinder. The designed and developed flexible structure was tested at the shore, and the main conclusions were the following: i) the composite material behaved appropriately, and during the 72-hour monitoring period no interventions caused by damage were needed; ii) no damage to the textile material or change in the geometry of the solid shape was recorded. The experiments carried out in open-sea conditions required consideration of the specific features of a continuously moving surface (due to sea waves and currents), with large temperature variations and difficult weather conditions. During the experimental trials, the floating systems were placed at Lat. 43.985, Lon. 28.607, altitude 50 m, and meteorological observations were made every 24 hours over a period of 10 days in May, when dangerous phenomena such as descending gusts (white squalls, formed as a result of the rise of water into the atmosphere with the development of cumulonimbus clouds) and high nebulosity were recorded. Regular inspections were performed, and the appropriate behavior of the composite material used as the core of the floating marking/signalling systems was registered.
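
Not the paper's FEM analysis, but a back-of-the-envelope Python check of the underlying physics: the reserve buoyancy of a right circular cylinder float, with all dimensions, the mass, and the seawater density assumed for illustration.

```python
# Rough check (assumed values, not the paper's design data): how much reserve
# buoyancy a cylindrical float of given size and mass would have in seawater.
import math

RHO_SEAWATER = 1025.0      # kg/m^3, typical Black Sea value assumed
G = 9.81                   # m/s^2

diameter_m = 0.30          # assumed float diameter
length_m = 1.00            # assumed float length
mass_kg = 12.0             # assumed mass of the composite float

volume_m3 = math.pi * (diameter_m / 2) ** 2 * length_m
max_buoyant_force_n = RHO_SEAWATER * G * volume_m3    # fully submerged
weight_n = mass_kg * G

print(f"max buoyancy: {max_buoyant_force_n:.1f} N, weight: {weight_n:.1f} N")
print(f"reserve buoyancy: {max_buoyant_force_n - weight_n:.1f} N")
```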

Alexandra Gabriela Ene, Carmen Mihai, Mihaela Jomir, Constantin Jomir
Open Access
Article
Conference Proceedings

A Data Retrieval Model for Distributed Heterogeneous Pharmacy Information Sources

The need for sharing data in various domains has increased significantly over the past decades and has become the focus of many theoretical works, as numerous data-related problems remain unsolved. Hospitals exemplify this notion, given that they are complex institutions with constantly evolving patient-related services and ever-growing data stored on heterogeneous data sources. The purpose of this research is to solve patients' difficulty in checking drug availability in pharmacies in their vicinity, specifically for pharmacies in Saudi Arabia. A qualitative study was conducted to obtain a comprehensive view from two hospitals in Riyadh, KSA, of their HIS implementation and the integration approaches used. To address the data integration challenges faced by these hospitals, a data retrieval model to integrate data from heterogeneous sources has been developed and tested. Various reasons affect the successful implementation and adoption of HIS; the main reason for the lack of HIS adoption in Saudi Arabia is the lack of expertise in systems integration and weak integration planning and architecting. This research examined integration approaches and found that there is no single optimal approach for solving complex integration issues; a combination of multiple integration approaches should be utilized to leverage their respective advantages. One of the main components of an HIS is the Pharmacy Information System (PIS), which is responsible for storing and managing medication-related data. However, PISs in pharmacies are heterogeneous and not integrated, so users cannot search for medication availability across multiple pharmacies. A data retrieval model has been designed to integrate heterogeneous data sources and has been validated by implementing a mockup E-Pharmacy mobile application that helps the user search for medications in pharmacies in Saudi Arabia. This data retrieval model can be applied in many fields and benefit various organizations in their data integration initiatives.
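
The following is an illustrative sketch, not the paper's data retrieval model: a thin retrieval layer that queries heterogeneous pharmacy sources through source-specific adapters and merges the results into one schema. Source types, field names, and data are hypothetical.

```python
# Illustrative sketch of federated retrieval over heterogeneous pharmacy
# sources; every adapter, field name, and record below is made up.
from dataclasses import dataclass
from typing import Iterable

@dataclass
class Availability:
    pharmacy: str
    drug: str
    in_stock: bool

class SqlPharmacyAdapter:
    """Stands in for a pharmacy exposing a relational PIS."""
    def __init__(self, rows):                      # rows: (drug_name, quantity)
        self.rows = rows
    def search(self, drug: str) -> Iterable[Availability]:
        for name, qty in self.rows:
            if name.lower() == drug.lower():
                yield Availability("SQL Pharmacy", name, qty > 0)

class ApiPharmacyAdapter:
    """Stands in for a pharmacy exposing a JSON web API."""
    def __init__(self, payload):                   # payload mimics an API response
        self.payload = payload
    def search(self, drug: str) -> Iterable[Availability]:
        for item in self.payload["items"]:
            if item["drug"].lower() == drug.lower():
                yield Availability("API Pharmacy", item["drug"], item["available"])

def retrieve(drug: str, adapters) -> list[Availability]:
    # Fan the query out to every source and merge into one uniform result list.
    results = []
    for adapter in adapters:
        results.extend(adapter.search(drug))
    return results

adapters = [
    SqlPharmacyAdapter([("Paracetamol", 40), ("Ibuprofen", 0)]),
    ApiPharmacyAdapter({"items": [{"drug": "Paracetamol", "available": True}]}),
]
print(retrieve("Paracetamol", adapters))
```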

Suad Alramouni, Reem Hassounah
Open Access
Article
Conference Proceedings

Short-time taxi demand prediction based on Transformer-LSTM in integrated transportation hub

Taxis, as an important part of comprehensive transportation passenger flow connections, are one of the main modes for passengers in integrated transportation hubs due to their accessibility and convenience. Because of passengers' travel choices and the frequent collection and distribution of large passenger flows in comprehensive transportation hubs, the demand for taxis fluctuates greatly. In addition, unreasonable scheduling of taxis introduces many other uncertainties into the transfer process, such as long transfer times and passengers stranded at the taxi stand, which cause delays and wasted time for passengers. To improve the productivity of hub operations in passenger flow distribution and to serve the dynamic decision-making of taxi drivers, it is necessary to predict the demand for taxis in the hub in a timely manner. Based on deep learning methods, this paper builds a short-term taxi demand forecasting model that can assist taxi drivers in deciding when to go to the taxi storage yard, so as to match passengers' taxi demand with the supply of taxis and reduce waiting time. By fully mining the time-series characteristics of historical taxi flow data, the model integrates a Transformer and an LSTM neural network for short-term prediction of taxi demand every 15 minutes. Taking the Shanghai Hongqiao transportation hub as an example, the experiment collected three months of taxi cross-section traffic data to train the model. The results show that the trained Transformer-LSTM model has high accuracy in predicting short-term taxi demand. To verify the superiority of the model, it is compared with other prediction models such as CNN and LSTM; the experimental results show that the Transformer-LSTM model achieves the highest overall accuracy. The model can be used to provide a forecast service for passengers' taxi demand in transportation hubs and to provide a powerful reference for the optimization of taxi dispatching.
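
A minimal PyTorch sketch of the kind of hybrid architecture the abstract names, a Transformer encoder feeding an LSTM that predicts demand for the next 15-minute slot; layer sizes, sequence length, and the toy data are assumptions, not the authors' configuration.

```python
# Sketch of a Transformer + LSTM forecaster for 15-minute taxi demand.
# Hyperparameters and input shapes are assumed for illustration.
import torch
import torch.nn as nn

class TransformerLSTM(nn.Module):
    def __init__(self, d_model=32, nhead=4, lstm_hidden=64):
        super().__init__()
        self.input_proj = nn.Linear(1, d_model)          # univariate demand series
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.lstm = nn.LSTM(d_model, lstm_hidden, batch_first=True)
        self.head = nn.Linear(lstm_hidden, 1)            # demand in the next slot

    def forward(self, x):                                # x: (batch, seq_len, 1)
        h = self.encoder(self.input_proj(x))             # self-attention over history
        _, (h_n, _) = self.lstm(h)                       # sequential summary
        return self.head(h_n[-1])

# Toy usage with random data standing in for historical 15-minute counts.
model = TransformerLSTM()
x = torch.randn(8, 96, 1)                                # 8 samples, 24 h of 15-min slots
print(model(x).shape)                                    # torch.Size([8, 1])
```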

Wenjuan Zhang, Xiujie Li, Bin Zhang, Haozhe Yang, Guangbin Wang
Open Access
Article
Conference Proceedings

Hackathon-based software development: Lessons learned from an internal corporate hackathon

This article discusses the qualitative evaluation of the results of a corporate internal hackathon, detailing its design, execution, and results. The article begins by noting the factors motivating the decision to hold an internal hackathon. It then describes how the hackathon was structured to fit the corporate environment, the method followed to attack and solve the problem, as well as the outcome of the project undertaken and the effects on the team that participated in it. The article also examines the reasons behind the team's participation in the hackathon and the intangible rewards that the team members reported. The evaluation shows that there is value in using the hackathon method for the development of new solutions, as well as for the integration of those solutions into the corporation's existing software offerings. Another result of note is that the software developers reported several intangible rewards, including personal growth in the context of software development and a strengthening of the team bond, which helped the team work more efficiently and communicate better. Finally, the article proposes a software design and implementation methodology suited to the development done during a hackathon.

Georgios Christou
Open Access
Article
Conference Proceedings

Improving Internet Advertising Using Click-Through Rate Prediction

Online advertising is a billion-dollar industry, with many companies choosing online websites and various social media platforms to promote their products. The primary concerns in online marketing are to optimize the performance of a digital advert, reach the right audience, and maximize revenue, which can be achieved by accurately predicting the probability of a given ad being clicked, called the click-through rate (CTR). A high CTR is assumed to indicate that the ad is reaching its target customers, while a low CTR shows that it is not reaching its desired audience, which may translate into a low return on investment (ROI). We propose a data-science-driven approach to help businesses improve their internet advertising campaigns, which involves building various machine learning models to accurately predict the CTR and selecting the best-performing model. To build our classification models, we use the Avazu dataset, publicly available on the Kaggle website. Having insight into this metric allows companies to compete in real-time bidding, gauge how relevant their keywords are in search engine queries, and mitigate unexpected losses in spending budget. In this paper, we strive to use modern machine learning tools and techniques to improve the performance of CTR prediction in online advertisements and bring change to the industry.
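
As a hedged sketch of one possible CTR pipeline (not necessarily the authors' models), the code below hashes categorical ad features and fits a logistic-regression classifier, reporting AUC; the tiny synthetic rows stand in for the Avazu data.

```python
# Sketch of a baseline CTR model: hash high-cardinality categorical ad
# features, fit logistic regression, evaluate with ROC AUC. Data is synthetic.
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Tiny hypothetical rows; the real Avazu data has millions of records.
rows = [
    {"site_id": "a1", "device_type": "mobile", "banner_pos": "0"},
    {"site_id": "b2", "device_type": "desktop", "banner_pos": "1"},
    {"site_id": "a1", "device_type": "mobile", "banner_pos": "1"},
    {"site_id": "c3", "device_type": "tablet", "banner_pos": "0"},
] * 50
clicks = [1, 0, 1, 0] * 50

X = FeatureHasher(n_features=2**10, input_type="dict").transform(rows)
X_tr, X_te, y_tr, y_te = train_test_split(X, clicks, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```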

Rakesh Gudipudi, Sandra Nguyen, Doina Bein, Sudarshan Kurwadkar
Open Access
Article
Conference Proceedings

Crowdsourcing for Second Language Learning

This work implements language acquisition and crowdsourcing techniques in a unique combination to aid second language learning. It allows users to contribute linguistic assets, while other users vote on the quality of those assets. The system's goal is to provide a social platform for learning and contributing to underrepresented languages. The authors establish quality attributes for the system, namely usability, scalability, security, and portability. The resulting system is tested against these quality attributes using quality scenarios and usability testing. The implemented system is shown to possess the qualities of security, scalability, and portability. Usability testing highlights the importance of the user interface for crowdsourcing systems and shows possible interface improvements.
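
One way such quality voting could be aggregated, offered purely as an assumption rather than the system's actual scoring rule, is a Wilson lower-bound score that avoids over-ranking assets with very few votes.

```python
# Hypothetical ranking of user-contributed language assets by the Wilson
# lower bound of their up/down votes; asset names and counts are invented.
import math

def wilson_lower_bound(upvotes: int, downvotes: int, z: float = 1.96) -> float:
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p = upvotes / n
    return (p + z * z / (2 * n)
            - z * math.sqrt((p * (1 - p) + z * z / (4 * n)) / n)) / (1 + z * z / n)

assets = [("word list: greetings", 40, 5),
          ("audio: numbers 1-10", 3, 0),
          ("phrase pack", 12, 10)]
for name, up, down in sorted(assets, key=lambda a: wilson_lower_bound(a[1], a[2]),
                             reverse=True):
    print(f"{name}: {wilson_lower_bound(up, down):.2f}")
```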

Abdelrahman Abounegm, Nursultan Askarbekuly, Magomed Magomedov, Manuel Mazzara
Open Access
Article
Conference Proceedings

Evaluating embedded semantics for accessibility description of web crawl data

The Web is ever expanding, even more so due to the increased content consumption brought on by the pandemic. This fact highlights the need for equity in access to Web content for all people, regardless of their disabilities. To this end, it is essential to focus on web accessibility issues. The World Wide Web Consortium (W3C), the leading organization responsible for ensuring the growth of the social value of the Web, establishes standards, protocols, and recommendations to improve the reach of web content. For instance, the Web Content Accessibility Guidelines (WCAG) promote the achievement of web accessibility. Furthermore, other W3C recommendations foster embedding semantics into web content to help browsers build a machine-readable data structure, aiming to produce enriched descriptions in search results that support people in finding the right content for their queries and, consequently, improve user experience. Searching for specific web content is especially challenging for people with disabilities because they may be forced to explore many search results before finding content that matches their accessibility requirements. If embedded semantics communicate the accessibility properties of the content, the search becomes more productive for everyone, but even more so for people with special needs. Embedded semantics require two components: a vocabulary and an encoding format. The Schema.org vocabulary has experienced high growth and encompasses plenty of descriptors for each type of web information, including a set of descriptors for accessibility information. Regarding the format, JSON-LD is the latest W3C recommendation for encoding due to its ability to make JSON data interoperate at Web scale; it provides a quick transformation to a Linked Data format and is simple enough to be read and written by people. This research conducts a quantitative analysis of the semantics embedded in web content by processing a dataset obtained from millions of web crawl records for 2021. The data come from distinct provenances and purposes at a global scale. In this web content, each annotation is made through a JSON-LD script of embedded semantics using the Schema.org vocabulary. The analysis defines how the accessibility descriptors are used in conjunction with other classes and properties to describe web information on personal blogs, organizations, events, educational content, universities, persons, commerce, sports, medicine, entertainment, and more. The results provide a perspective on the awareness of accessibility across the different purposes of the Web. The processing was performed on collected zip files that contain over three hundred million records. The analysis was conducted using massive data analysis techniques, such as key-value modeling with Python for processing and a NoSQL database such as MongoDB for storage. A new dataset with normalized data was generated, with information about domains, types of web content, and properties associated with the accessibility descriptors. The collection and storage layers were implemented on a computing platform with 30 GB of RAM, 10 CPUs, and 2 TB of storage. This research delivers two main contributions. First, it analyzes the interest across the Web in using accessibility descriptors in embedded semantics; the quantitative results make visible the concern for equity and inclusion, expressed through accessibility annotations in different entities, according to the web domains. Moreover, these results reveal how the W3C recommendation of embedded semantics is being adopted to create a more organized and better-documented Web. Second, processing the raw dataset results in a new normalized dataset in JSON format with information about domains, web content types, and properties associated with the accessibility descriptors. This new dataset will be available for further analysis of embedded semantics.
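
A condensed Python sketch of the kind of extraction step described above, using a made-up page rather than the actual crawl: it pulls Schema.org accessibility properties out of an embedded JSON-LD script; in the study, such records would then be stored in MongoDB.

```python
# Illustrative extraction of Schema.org accessibility properties from a
# JSON-LD annotation; the HTML snippet is invented, not crawl data.
import json
import re

html = """
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Book", "name": "Example",
 "accessibilityFeature": ["alternativeText", "longDescription"],
 "accessMode": ["textual", "visual"]}
</script>
"""

ACCESSIBILITY_PROPS = ("accessibilityFeature", "accessibilityHazard",
                       "accessMode", "accessModeSufficient",
                       "accessibilityAPI", "accessibilityControl")

records = []
for block in re.findall(r'<script type="application/ld\+json">(.*?)</script>', html, re.S):
    data = json.loads(block)
    found = {p: data[p] for p in ACCESSIBILITY_PROPS if p in data}
    if found:
        records.append({"type": data.get("@type"), **found})

print(records)   # such normalized records could then be inserted into MongoDB
```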

Rosa Navarrete, Diana Martinez- Mosquera, Lorena Recalde, Marco Aguirre
Open Access
Article
Conference Proceedings

ETL and ML Forecasting Modeling Process Automation System

Given the importance of online retailers in the market, forecasting sales has become one of the essential strategic considerations. Modern machine learning tools help many online retailers forecast sales, but these models need refinement and automation to increase efficiency and productivity. If an automated function can capture historical data and execute forecasting models automatically, it reduces the time and human resources the company needs to manage the forecasting system, and an automated data processing and forecasting system offers the marketing department more flexible market sales forecasting. Proposed here is an automated weekly sales forecasting system that integrates an Extract-Transform-Load (ETL) data pipeline with a machine learning forecasting model and sends the outcomes as messages. For this study, the data are obtained for an online women's shoe retailer from three data sources (AWS Redshift, AWS S3, and Google Sheets). The system collects the sales data for 120 weeks, passes it through an ETL process, and runs the machine learning forecasting model to forecast the sales of the retailer's products in the next week. The machine learning model is built using a random forest regressor. The top 25 products with the most popular forecasting results are selected and sent to the owner's email for further market evaluation. The system is built as a Directed Acyclic Graph (DAG) using Python scripts on Apache Airflow. To facilitate the management of the system, the authors set up Apache Airflow in a Docker container. The whole process does not require human monitoring or management; if an error occurs during execution on Airflow, the project owner is notified to inspect its cause.
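
The sketch below outlines a weekly Airflow DAG of the shape the abstract describes; the task bodies, placeholder data, and names are assumptions rather than the authors' code, and the real system would read from AWS Redshift, AWS S3, and Google Sheets and send an email.

```python
# Condensed sketch of a weekly ETL + forecasting DAG; all task bodies and
# identifiers are placeholders, not the paper's implementation.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # In the paper, data comes from AWS Redshift, AWS S3, and Google Sheets.
    return [("product_a", 120), ("product_b", 80)]

def transform_and_forecast():
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    X = np.arange(120).reshape(-1, 1)          # 120 weeks of history (assumed shape)
    y = 100 + 0.5 * X.ravel()                  # placeholder sales series
    model = RandomForestRegressor(random_state=0).fit(X, y)
    print("next-week forecast:", model.predict([[120]])[0])

def notify():
    print("email top-25 forecast to the owner")  # e.g., via an email operator

with DAG(dag_id="weekly_sales_forecast",
         start_date=datetime(2023, 1, 1),
         schedule_interval="@weekly",
         catchup=False) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform_and_forecast",
                        python_callable=transform_and_forecast)
    t3 = PythonOperator(task_id="notify", python_callable=notify)
    t1 >> t2 >> t3
```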

Jennifer Wu, Doina Bein, Jidong Huang, Sudarshan Kurwadkar
Open Access
Article
Conference Proceedings

Design of Library Management System Based on MVVM Framework and ZXing Scanning Code Technology

The library is an important resource for university learning; nowadays, it has gradually become the place where students study the most at universities at home and abroad. Our team therefore designed an interdisciplinary project for the library of Huazhong University of Science and Technology. A survey found that the libraries of Huazhong University of Science and Technology and other universities offer self-service management through self-service machines, official accounts, official websites, and other channels, but students generally find the functions cumbersome and inconvenient to query, so we set out to design a book self-service management system that brings the greatest convenience to users. This project uses literature analysis, questionnaires, interviews, and experiments to understand in depth the pain points of library self-service machines, the design principles of related library interfaces at home and abroad, and the needs of users, in order to determine the information architecture, interaction experience, and technical requirements of the system. The project is mainly an interdisciplinary practice of computer science and industrial design. On the technical side, the system uses Android and the MVVM architecture to render the front-end interface, implements Android network requests through OkHttp and Retrofit, and uses the open-source ZXing scanning technology to realize borrowing and returning books from a mobile phone; the Go language and the Echo web framework are used for back-end development, and Docker is used to deploy containers. On the design side, the system draws on ergonomics and design psychology; it not only realizes basic mobile functions such as scanning codes, borrowing and returning books, map guidance, and searching for books, but also adds special functions such as lost and found and recording reading time, so as to enhance the user's personalized experience. In usability testing, the interviewees stated that the design can greatly improve learning efficiency in the library.

Yuqi Li, Hongyu Zhang, Jinhui Xu
Open Access
Article
Conference Proceedings

Communication of the linguistic awareness of Ecuadorian students, through a web system

This study proposes the design of a web system to support the learning of Ecuadorian students. The aim is to reduce the number of students who have difficulties reading and writing, because a single teacher cannot give attention to all students when several grades and many students are taught at the same time. The tool is designed so that students learn to recognize sounds, form sentences, and express the meanings of words. It contributes to the teaching-learning process by focusing on the generative word method created by Paulo Freire, which is used for adult literacy and is also currently applied in many educational institutions in Latin America. The work is motivated by the number of students who have problems or difficulties in reading and writing, which delays their learning. The development of a web system for building linguistic awareness will allow students to learn through the generative word method, making use of an element that is now part of everyday life: technology. This adaptation of Paulo Freire's method allows the same procedure to be carried out from a computer through the web. With this implementation, students will be able to recognize sounds, write words, and form sentences correctly. The project is structured as follows: first, a theoretical framework reviews the relevant concepts of the research topic; then the methodology details the type of research conducted; and finally a proposal presents a solution to the problem. The interest in creating an educational tool for a multigrade institution arises from the professional need to provide easy access to pedagogical tools that support the teaching process, since a large percentage of institutions of this type exist in different countries, especially in Latin America.
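
To illustrate the mechanics of the generative word method the system builds on, here is a small, hypothetical Python sketch (Spanish example, syllabification given by hand) that expands a generative word's syllables into syllable families from which learners can form new words.

```python
# Hypothetical illustration of the "palabra generadora" step: split a
# generative word into syllables and build syllable families.
VOWELS = "aeiou"

def syllable_family(syllable: str) -> list[str]:
    """Replace the vowel of a consonant+vowel syllable with each vowel."""
    consonants = "".join(c for c in syllable if c not in VOWELS)
    return [consonants + v for v in VOWELS]

word = "pelota"                       # hypothetical generative word
syllables = ["pe", "lo", "ta"]        # syllabification assumed, not computed
for s in syllables:
    print(s, "->", syllable_family(s))
```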

Josué Clery, Isabel Posligua, Edison Cruz, Martha Suntaxi, Roberto Palacios, Susana Molina, Gonzalo Vera, Arturo Clery
Open Access
Article
Conference Proceedings

Using Fuzzy Theory to Analyze Delivery Platforms Using Foodpanda as an Example

With the development of e-commerce in recent years and the changing habits of consumers, delivery platforms have emerged, and businesses in food, clothing, housing, and transportation are gradually collaborating with these platforms to provide services that meet the needs of today's consumers. Many delivery platforms have appeared in response to this trend, and many food and beverage businesses have partnered with them to expand their market size. Therefore, a fuzzy theory approach is used to analyze user satisfaction with Foodpanda and support companies' future decision-making.
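
As a toy illustration of a fuzzy-theory treatment of satisfaction data (the ratings, membership functions, and defuzzification rule are assumptions, not the paper's), the sketch below maps ratings onto triangular membership functions and derives a crisp satisfaction value.

```python
# Toy fuzzy evaluation: triangular membership functions over a 0-10
# satisfaction scale and a centroid-style defuzzification. All numbers assumed.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

ratings = np.array([6.5, 8.0, 7.0, 5.5, 9.0])   # hypothetical user ratings

degrees = {"low": tri(ratings, 0, 2, 5).mean(),
           "medium": tri(ratings, 3, 5, 7).mean(),
           "high": tri(ratings, 5, 8, 10).mean()}

# Defuzzify with the peaks of each label (2, 5, 8) weighted by membership.
crisp = sum(d * c for d, c in zip(degrees.values(), (2, 5, 8))) / sum(degrees.values())
print(degrees, round(crisp, 2))
```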

Shuo-Fang Liu, Chien-Ting Wu
Open Access
Article
Conference Proceedings

A Formal Method for the Analysis of the Veteran’s Ebenefits’ Website

Currently, the eBenefits/VA.gov website has login problems. This is a serious situation because it prevents veterans from accessing critical services: applying for medical disability benefits; enrolling in health care services; accessing educational benefits; managing current VA benefits; acquiring home and auto loans, life insurance, and burial services; and connecting with veteran networks for other community resources. When a service member is unable to access this website, they are unable to help themselves, which adds inefficiencies to the overall VA system and could lead to poor veteran integration into civilian life. In this analysis, we used probabilistic model checking (a formal method for proving properties about stochastic systems) to identify the optimal process for veterans to log in to eBenefits while adhering to safety constraints for protecting veterans' sensitive information. To perform this analysis, a veteran in our research group documented multiple login attempts to obtain realistic probabilities of the system transitioning between different interface states. Probabilistic model checking was then used to quantify the probability of getting from the initial states to a successful login. The probability of a successful login to eBenefits was found to be 0.25. Reviewing the data produced by the model checker revealed that a particular state, two-factor authentication, used to verify the veteran's identity with a password and a passcode sent to a device in their possession, was problematic. Our analyses also showed that one particular path, the Defense Self-Service Logon path, was the most successful pathway, at 0.98. This pathway begins with the user entering their password, followed by verification of this password and a second authentication step, which the end user can skip or choose to bind to their mobile device, before a successful login. Based on this path, we found that use of a Common Access Card was most effective for enabling logins. Further work utilizing formal methods to expose problems in the eBenefits website will be explored in the full paper.
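
For readers unfamiliar with probabilistic model checking, the toy sketch below computes reachability of a "successful login" state in a small discrete-time Markov chain, the core calculation a model checker such as PRISM performs; the states and transition probabilities are invented, not the paper's measured values.

```python
# Toy DTMC reachability: probability of absorbing into "success" from "start".
# All states and probabilities are hypothetical.
import numpy as np

# Transient states: 0 = start, 1 = password entered, 2 = two-factor prompt.
Q = np.array([[0.0, 0.9, 0.0],      # start -> password entered
              [0.0, 0.0, 0.7],      # password entered -> two-factor prompt
              [0.0, 0.2, 0.0]])     # two-factor prompt -> retry password
# Absorbing states (columns): failure, success.
R = np.array([[0.1, 0.0],
              [0.3, 0.0],
              [0.3, 0.5]])

# b = (I - Q)^-1 R gives absorption probabilities from each transient state.
b = np.linalg.solve(np.eye(3) - Q, R)
print("P(successful login from start) =", round(b[0, 1], 3))
```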

Giovanna Camacho, Matthew Bolton, Jingan Peng, Prashanth Wagle, Lu Feng
Open Access
Article
Conference Proceedings

The Customer Experience of Energy Services

Climate protection and the limited availability of conventional energy sources have led to efforts to facilitate a transition to renewable sources. This trend also changes the way in which electricity is consumed and distributed: recently, end-users have taken an increasingly active role in the electrical power system through a collective form of energy self-consumption and sharing, so-called 'energy communities' [1]. In these communities, energy is generated with solar or wind technologies and distributed between members using local grids and community battery storage. The diffusion of energy communities on a large scale could provide advantages such as increasing customers' electricity savings, electricity suppliers' sales, and grid operators' revenues due to reduced grid tariffs for inner-community electricity transfer [2]. A barrier to a large-scale rollout is the fact that energy often remains invisible to most citizens and is merely perceived in terms of 'energy services' ("[…] functions performed using energy which are means to obtain or facilitate desired end services or states" [3]). This focus on energy services can give rise to a wide range of information needs but also to different attitudes in the evaluation of energy communities from the perspective of potential customers. It is therefore necessary to analyze whether companies address such requirements in order to establish a positive customer experience. In this study, the topic is operationalized through the following research questions. Communication from the company's point of view: How are energy communities advertised by companies that support customers in implementing them? Information needs from the customer's perspective: What do potential customers want to know about energy communities? The questions are examined in a comparative analysis based on text mining methods. For this purpose, data were collected from two types of sources: comments from social media addressing energy communities and promotional material in which companies communicate energy communities to potential customers. Both data sets were analyzed with regard to the research questions. The results show a mismatch between what customers want to know about energy communities and what companies communicate about such forms of energy production and distribution. In particular, risks perceived by potential customers (such as concerns about the equitable distribution of energy) are hardly addressed. By resolving such mismatches, the diffusion of energy communities could be accelerated. The results are discussed in terms of possible measures to enhance the customer experience.
References
[1] Iazzolino, G., Sorrentino, N., Menniti, D., Pinnarelli, A., De Carolis, M., Mendicino, L. (2022). Energy communities and key features emerged from business models review. Energy Policy, 165. https://doi.org/10.1016/j.enpol.2022.112929
[2] Fina, B., Monsberger, C., Auer, H. (2022). A framework to estimate the large-scale impacts of energy community roll-out. Heliyon, 8(7). https://doi.org/10.1016/j.heliyon.2022.e09905
[3] Fell, M. J. (2017). Energy services: A conceptual review. Energy Research & Social Science, 27, 129–140. https://doi.org/10.1016/j.erss.2017.02.010
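
As a rough sketch of the comparative text-mining step described in this abstract (toy sentences, not the study's corpora), the code below contrasts which terms dominate customer comments versus company material about energy communities.

```python
# Toy corpus comparison: which terms appear relatively more often in customer
# comments than in company material. All example texts are invented.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

customer = ["is the energy shared fairly between members",
            "what are the risks and grid fees for members"]
company = ["join our energy community and save on your electricity bill",
           "local solar energy for your community"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(customer + company).toarray()
terms = vec.get_feature_names_out()

cust_freq = X[:len(customer)].sum(axis=0)
comp_freq = X[len(customer):].sum(axis=0)
gap = (cust_freq + 1) / (comp_freq + 1)            # smoothed frequency ratio

for i in np.argsort(gap)[::-1][:5]:
    print(terms[i], round(float(gap[i]), 2))       # terms customers raise, companies omit
```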

Claas Digmayer, Nina Rußkamp, Eva-Maria Jakobs
Open Access
Article
Conference Proceedings

Human Factors for Advanced Reactors

Existing light water reactors in the U.S. are primarily large baseload electricity generating facilities. The concept of operations for these plants remains largely unchanged since the advent of commercial nuclear power—the main control room serves as the hub of plant activities and is staffed with multiple licensed operators who work in tandem under the shift supervisor, and staff such as field workers support the control room remotely. While newer plants have brought the advent of digital human-machine interfaces to replace earlier analog and mechanical instrumentation and controls, much of the control process remains unchanged and manual. It is simply a newer version of legacy concepts. Advanced reactors potentially bring considerable changes to the size, fuel type, automation, and staffing of nuclear power plants, necessitating a fundamental shift not just from analog to digital, but further from human to automation, from onsite to remote, from control to monitoring, and from many to few operators. Despite this multitude of parallel evolutions in reactor designs, many of the vendors developing the next generation of reactors represent smaller research and development enterprises. It is therefore not feasible to address all aspects of plant design at the same time. In particular, the competing design aspects of new reactors present a significant challenge to the development of robust and human factored systems at the plant. As vendors develop new reactor designs, much of the early focus is naturally on the fuel and reactor system technology. Looming behind these early advances is the daunting prospect of first-of-a-kind control concepts that have not yet been developed or validated. A failure to address the human element of reactor design early will lead to missed opportunities. The quickest development process is the replication of existing concepts of operations at legacy plants, even when such systems were long ago surpassed by better human-machine technologies outside the nuclear industry. Conversely, attempting to undertake novel concepts of operations late in the design life cycle of a plant could result in protracted development efforts and delays in licensing and deployment. This does not have to happen, and it is imperative that human factors be considered now, early in the design of new reactors.

Ronald Boring
Open Access
Article
Conference Proceedings

Energy Cooperatives as Energy Transition Actors

Germany is pursuing the ambitious goal of climate neutrality by 2045, yet the expansion of renewable energies is taking place far too slowly. There are several reasons for this. Project approval and planning processes are too lengthy and time-consuming; they are also often complicated by citizen protests against wind farms and ground-mounted photovoltaic parks. Various studies show that many protests are the result of perceived conflicts. The nature and extent of the conflicts vary, e.g., depending on the type of conflict, the stakeholders, the technology, and the local context. Overall, studies show that communication is a crucial factor in the success of infrastructure projects, e.g., as a means of conflict management. Many studies about energy infrastructure projects look at projects that are implemented by companies. This study changes the perspective on the ongoing transformation process toward renewable energies: the focus is not on companies that need to interest and attract citizens to renewable energy projects but on citizens who join citizen energy cooperatives (CECs) and become local entrepreneurs themselves. The study aims to provide statements on how CECs function, what communication tasks they must master, and what challenges arise in communicating internally with members and externally with stakeholders. This includes a deeper understanding of the perspectives of CECs regarding current legislation and planned business models, as well as conflicts that have arisen and are emerging. For the study, CECs were researched online, and executives (n=12) were asked for an interview. Before the in-depth interview, they were asked to complete a questionnaire providing information about their cooperative. The literature-based interview guideline addresses three topic areas: (1) CECs' expertise and scope of tasks; (2) communication tasks of the CEC; and (3) perceived risks and conflicts. The interviews were conducted in 2022. The data were anonymized, transcribed, and analyzed qualitatively (interviews) and quantitatively (preliminary questionnaire). Results: the respondents understand communication as an essential and success-relevant part of the implementation of CEC energy projects. In recent years, the communication effort for CECs has steadily increased. The most important communication areas are project planning and acquisition, public relations, and member communication. Most respondents are convinced that local acceptance of energy projects is higher when CECs implement projects compared to companies, since those responsible are themselves members of the community and therefore share the local needs, desires, and problems of community actors. To remain competitive, the CECs surveyed are expanding their portfolio structure, e.g., with communication-intensive models such as tenant electricity. CECs are developing many formats and pursuing novel approaches. Companies would be well advised to learn from this wealth of ideas from CECs, e.g., in public relations work and in addressing new target groups.

Nils Hellmuth, Eva-Maria Jakobs
Open Access
Article
Conference Proceedings

Optimization of turbine generator through vibration damping for maximum service life in power plants

Power plant condition monitoring data is essential in identifying unscheduled maintenance needs. The data obtained from monitoring the condition of a power plant over numerous years of operation indicates that the primary reason for the failure of turbo generators due to vibrations is the misalignment of the turbine centreline. It is crucial to identify problems with steam turbines to prevent load losses and boost the operational reliability of a turbo generator. This paper presents the vibrational characteristics of a 500 MW turbo generator and the performance boost attained through optimized turbine maintenance. Shaft relative vibrations were analyzed at run-up at 500 rpm with no load and at 3000 rpm with approximately 420 MW. The study found that the highest absolute pedestal vibration levels were reduced by 8.5% as a result of maintenance optimization.

Steven Vusmuzi Mashego, Timothy Laseinde
Open Access
Article
Conference Proceedings

A Case study on the implementation of maintenance strategies at an energy generation facility

This paper examines a case study involving the implementation of preventative maintenance strategies gleaned from a coal power plant's turbo generator. Turbo generators are an essential and fundamental part of the power generation process, and applying the right maintenance strategies and following the right operational procedures are crucial to keeping the machinery reliable. The research was conducted at a coal-fired power plant with an eye toward the eventual switch to cleaner forms of energy production. This research is essential because it will help inform plans and operational models when renewable energy sources such as solar and wind power replace coal-powered turbines. The paper first describes various approaches to turbo generator module maintenance; it then discusses the effects of human factors at a coal power station and presents recommendations for future actions.

Timothy Laseinde, Steven Vusmuzi Mashego
Open Access
Article
Conference Proceedings

Building Information Modeling approach for Design and Operation of Electrical Substations Integrated with Geographic Intelligence Systems (GIS)

In the Brazilian electrical sector, there are no references regarding the application of BIM (Building Information Modeling) and GIS (Geographic Information System) to the construction and maintenance of electric power substations. This work therefore proposes the integration of these technologies, since previous experiences in other engineering fields have shown promising advances that could be useful for the management and maintenance of the electrical power market. By associating these technologies, it is possible to obtain a more accurate mapping of the information related to assets, arrangements, cabling, electronic components, etc. For this integration to work properly, a three-dimensional geometric database of the entire set of active components of the electric power substation must also be provided. The insertion of a model into a particular point of the substation project then gives access to constructive, operational, and maintenance information. Therefore, by combining BIM and GIS in the modeled families, it is possible to obtain more consistent information during the construction or maintenance phase. This provides advantages in decision making, in resources and corporate communication, and in understanding the environment of an electrical energy substation. Additionally, the location of the substation and its surroundings become more precise and pertinent, since the components of the substation are geo-referenced. The association between these two platforms allows a more intuitive overview of the project, supporting planning, design, construction, operation, and preventive and corrective maintenance. When applying these tools together, the company will obtain results almost immediately, since all management features will be accessed through a single integrated information database. This proposal presents the very first results of the integration of BIM and GIS in the context of a Brazilian electric company, Furnas Energy. Implementation results for the company's substations are discussed and show the possibility of reducing construction and maintenance costs, improving alteration planning and logistics, preventing possible accidents, and updating information in real time.
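
A simplified sketch of what a geo-referenced asset record could look like once BIM attributes and GIS coordinates live in one integrated database; the field names, coordinates, and element data are assumptions for illustration.

```python
# Hypothetical geo-referenced substation asset: BIM attributes attached to a
# GeoJSON feature so location and equipment data share one record.
import json

feature = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [-43.2, -22.9]},   # lon, lat (assumed)
    "properties": {
        "bim_family": "Circuit Breaker 145 kV",      # hypothetical modeled family
        "element_id": "CB-017",
        "status": "in service",
        "last_maintenance": "2022-10-14",
    },
}
print(json.dumps(feature, indent=2))
```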

Alexandre Cardoso, Gerson Lima, Edgard Afonso Lamounier, Andre Luis De Araujo, Arnaldo Rosentino, Ana Cristina De Freitas Marotti, Ricardo Oliveira Rocha
Open Access
Article
Conference Proceedings