Quantifying CTGAN Training Dynamics: A Convergence-Aware Self-Adaptive Framework

Open Access Article (Conference Proceedings)
Authors: Andrea Gregores Coto, Andrea Fernández Martínez, Ramon Angosto Artigues, Santiago Muiños Landín, Jonathan Josue Torrez Herrera

Abstract: Training Generative Adversarial Networks (GANs) for tabular data remains challenging due to unstable convergence between the generator and discriminator. Conditional Tabular GANs (CTGANs) are particularly sensitive to hyperparameter configurations, where inadequate tuning often leads to mode collapse or degraded data fidelity. Despite existing optimization strategies, convergence is typically assessed qualitatively rather than through explicit quantitative criteria. This work introduces a convergence metric that formally characterizes adversarial training dynamics in CTGANs. The metric evaluates curve validity, stability, and decrement behavior to detect balanced generator–discriminator dynamics. It is integrated into a self-adaptive Bayesian hyperparameter optimization framework, where convergence quality and statistical data fidelity are jointly maximized through a composite objective function. The approach is validated on benchmark datasets and on a high-dimensional industrial dataset from the aluminum sector, demonstrating its potential in materials science applications. Results show improved training stability and synthetic data robustness, while adaptive search space refinement reduces computational cost. The proposed methodology enables systematic, convergence-aware CTGAN training in complex real-world scenarios.
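
The abstract does not give the exact form of the convergence metric or the composite objective; the sketch below is a purely illustrative interpretation, not the authors' implementation. It scores a generator loss curve on three hypothetical criteria (validity, stability, decrement) and blends that score with an assumed statistical fidelity value into a single scalar that a Bayesian optimizer could maximize. All function names, weights, and thresholds here are assumptions.

```python
import numpy as np

def convergence_score(gen_losses, window=20, decrement_tol=0.0):
    """Illustrative convergence score for a generator loss curve.

    Combines three hypothetical criteria loosely inspired by the abstract:
    validity (all losses are finite), stability (low variance over the
    final window), and decrement (late losses below early losses).
    """
    losses = np.asarray(gen_losses, dtype=float)

    # Validity: reject curves containing NaN/inf or too few points.
    if not np.all(np.isfinite(losses)) or losses.size < 2 * window:
        return 0.0

    # Stability: penalize high variance over the last `window` steps.
    tail = losses[-window:]
    stability = 1.0 / (1.0 + np.std(tail))

    # Decrement: reward curves whose tail mean sits below the head mean.
    head = losses[:window]
    decrement = 1.0 if tail.mean() <= head.mean() - decrement_tol else 0.0

    return stability * decrement

def composite_objective(gen_losses, fidelity, alpha=0.5):
    """Weighted blend of convergence quality and data fidelity.

    `fidelity` is assumed to be a [0, 1] statistical similarity score
    (e.g. from column-wise distribution tests); `alpha` sets the balance.
    """
    return alpha * convergence_score(gen_losses) + (1.0 - alpha) * fidelity

# Example: score one candidate hyperparameter configuration.
losses = np.linspace(2.0, 0.5, 300) + 0.05 * np.random.randn(300)
print(composite_objective(losses, fidelity=0.82))
```

Under this reading, the scalar returned by composite_objective would be the quantity the Bayesian optimizer maximizes across candidate CTGAN hyperparameter configurations, with the search space refined adaptively as described in the abstract.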

Keywords: Convergence Metric, Self-Adaptive Hyperparameter Optimization, Bayesian Optimization, Conditional Tabular GAN, Adversarial Training Dynamics, Synthetic Tabular Data

DOI: 10.54941/ahfe1007219

