Conditioned to Interact: A Computational Simulation of Pavlovian-Instrumental Transfer in Intelligent System Design

Open Access
Article
Conference Proceedings
Authors: Julie Rader, Ancuta Margondai, Sara Willox, Soraya Hani, Nikita Islam, Valentina Ezcurra, Mustapha Mouloua

Abstract: Intelligent systems increasingly leverage behavioral conditioning mechanisms to guide human engagement, yet the systematic effects of these mechanisms on human-AI interaction remain underexplored through computational modeling. This study presents a novel agent-based simulation framework that models Pavlovian-Instrumental Transfer (PIT) dynamics in human-intelligent system interactions, providing insight into how conditioned cues shape user behavior and offering design principles for ethical intelligent system development. PIT occurs when conditioned cues, such as notification icons, vibrations, or interface animations, increase the probability of operant behaviors by associating them with rewards. While PIT has been extensively documented in psychology and neuroscience, its systematic role in shaping human interactions with intelligent systems remains fragmented, and its potential for constructive applications in human-centered AI design has received limited scholarly attention.

This study employs a computational approach grounded in established reinforcement learning theory to develop a multi-agent simulation platform. Individual users are modeled as learning agents with empirically derived parameters for learning rate, reward sensitivity, and Pavlovian bias, based on validated human PIT research. The simulation incorporates multiple intelligent system scenarios, including adaptive learning platforms, autonomous vehicle interfaces, smart home systems, and safety-critical dashboards, each implementing different conditioning paradigms and transparency levels. The simulation framework replicates established human PIT effects before extending to intelligent system contexts, ensuring empirical validity. Key manipulations include cue modality (visual, auditory, haptic), reinforcement schedule (fixed versus variable ratio), and novel parameters relevant to intelligent system design, such as conditioning transparency and user autonomy controls.

The framework identifies three principles for ethical intelligent system design: conditional transparency (making conditioning mechanisms visible without eliminating their beneficial effects), adaptive autonomy (giving users control over their personal conditioning parameters), and context-sensitive scheduling (aligning reinforcement patterns with user goals rather than system metrics).

This computational approach offers significant methodological innovations for human-intelligent system integration research. By enabling ethical testing of potentially problematic conditioning scenarios before real-world deployment, the simulation provides a pathway for developing beneficial human-AI interactions and demonstrates how behavioral science principles can inform intelligent system design while preserving user well-being and autonomy. The study contributes to intelligent systems research by providing the first systematic computational model of behavioral conditioning in human-AI interaction, establishing empirically grounded design guidelines for ethical conditioning implementation, and offering a scalable methodology for testing human-centered AI principles before deployment. This work positions behavioral conditioning both as a diagnostic tool for identifying exploitative AI practices and as a constructive framework for next-generation human-intelligent system integration that enhances human capabilities while preserving agency in an increasingly automated world.
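The kind of learning agent the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation: the class name, parameter names (`alpha` for learning rate, `rho` for reward sensitivity, `pi_bias` for Pavlovian bias), and the two-action setup are hypothetical, chosen to show how a Pavlovian value term can bias an instrumental choice in a Rescorla-Wagner-style update.

```python
import math

class PITAgent:
    """Illustrative PIT-style learner (hypothetical parameterization).

    alpha    -- learning rate for value updates
    rho      -- reward sensitivity (scales experienced reward)
    pi_bias  -- weight of the Pavlovian cue value on the 'act' choice
    """

    def __init__(self, alpha=0.1, rho=1.0, pi_bias=0.3):
        self.alpha = alpha
        self.rho = rho
        self.pi_bias = pi_bias
        self.q = {"act": 0.0, "withhold": 0.0}  # instrumental action values
        self.v = 0.0  # Pavlovian value of the conditioned cue

    def action_prob(self, cue_present):
        # Pavlovian-instrumental transfer: an appetitive cue adds
        # pi_bias * v to the tendency to act, before a softmax choice.
        bias = self.pi_bias * self.v if cue_present else 0.0
        w_act = math.exp(self.q["act"] + bias)
        w_hold = math.exp(self.q["withhold"])
        return w_act / (w_act + w_hold)

    def update(self, action, reward, cue_present):
        # Rescorla-Wagner-style prediction-error updates for the chosen
        # instrumental action and, when the cue is present, its Pavlovian value.
        r = self.rho * reward
        self.q[action] += self.alpha * (r - self.q[action])
        if cue_present:
            self.v += self.alpha * (r - self.v)

agent = PITAgent()
for _ in range(50):
    agent.update("act", 1.0, cue_present=True)
# After cue-paired rewards, the cue raises the probability of acting.
```

In a sketch like this, a "variable ratio" schedule would simply deliver `reward=1.0` stochastically, while the `pi_bias` parameter is the lever a transparency or autonomy control could expose to the user.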

Keywords: Pavlovian-instrumental transfer, Human-computer interaction, Intelligent systems, Computational modeling, Behavioral conditioning

DOI: 10.54941/ahfe1007096
