Rapid personalized doppelgänger avatar generation: Dyadic evaluation of the TAC-Twin virtual human pipeline.

Open Access
Article
Conference Proceedings
Authors: Sharon Mozgai, Ed Fast, Andrew Leeds, Edwin Sookiassian, Kevin Kim, Arno Hartholt

Abstract: TAC-Twin (Hartholt et al., 2013; Hartholt et al., 2025) is a rapid, modular framework for generating personalized doppelgänger avatars from a single photograph using commercially available software integrated within the Virtual Human Toolkit (VHToolkit) (Hartholt et al., 2022). Developed at the USC Institute for Creative Technologies (ICT), TAC-Twin extends prior work on virtual human system architectures that support sensing, automated speech recognition, natural language processing, nonverbal behavior generation, and text-to-speech synthesis. The framework uses Reallusion Character Creator and Headshot to produce high-fidelity 3D avatars and deploys them via Unity and the RIDE platform, which provides scalable simulation and interoperability with multiple AI services (Hartholt et al., 202; Mozgai et al., 2023; Mozgai et al., 2024). In its present configuration, TAC-Twin generates a fully rigged, testbed-ready avatar in roughly 20 minutes, enabling rapid iteration without specialized 3D modeling expertise.

We conducted an exploratory mixed-methods evaluation to characterize early perceptions of usability, realism, and workflow effectiveness. Twenty participants (ten dyads), all affiliated with the USC ICT, completed the study. The convenience sample was intentionally composed of domain-relevant experts: researchers, engineers, and technical staff working with virtual human pipelines, Unity, and Unreal Engine. Eighteen participants reported moderate-to-high fluency with real-time 3D tools, making this group well-positioned to identify workflow bottlenecks and subtle perceptual artifacts that novice users might miss. Dyadic participation reflected typical workplace relationships and enabled naturalistic comparison of self- and partner-based avatars.

Each dyad completed a five-phase protocol: standardized photo capture; automated avatar generation; live pipeline demonstration; repeated viewing of a controlled Unity-based combat scenario; and post-interaction questionnaires with open-ended items. The scenario was designed to hold narrative, timing, and camera structure constant while embedding three avatar identities (Self, Partner, and Generic), so that observed differences could be attributed primarily to avatar identity. Avatar order was randomized within a within-subjects design.

Participants rated TAC-Twin as efficient and intuitive: 80% agreed that avatar creation required little effort, and 85% reported that manual refinement improved facial resemblance. Realism and engagement followed a consistent gradient, with self avatars rated highest, followed by partner and generic avatars. Qualitative analysis indicated that likeness was generally satisfactory, but behavioral expressivity, including micro-expressions, gaze timing, and post-impact reactions, remained a key limitation. Participants also reported habituation across repeated exposures to the identical scenario, underscoring the need for narrative and emotional variation when evaluating avatar-based systems.

Workflow-focused feedback highlighted TAC-Twin's strengths as a modular, repeatable pipeline, while noting reliance on expert intervention for facial refinement and engine integration. Methodological takeaways include the importance of balancing fidelity and scalability, anticipating affective flattening in repeated-exposure designs, and using principled APIs to manage trade-offs across sensing, language, and speech technologies.

In summary, TAC-Twin offers a practical open-source pathway for rapidly generating personalized virtual humans using production-ready tools and an extensible system architecture. The exploratory dyadic evaluation provides early evidence of feasibility and yields methodological guidance for researchers deploying doppelgänger avatars in health, training, and human–AI interaction contexts.

Keywords: Virtual humans, Avatar generation pipelines, Systems architecture, Behavioral realism

DOI: 10.54941/ahfe1007194
