Estimating Product Attributes Relevant to Purchase Decisions from Images in C2C Marketplaces
Open Access Article, AHFE Conference Proceedings
Authors: Kohei Otake, Yoshihisa Shinozawa
Abstract: In recent years, consumer-to-consumer (C2C) online flea markets, which are platforms where individuals buy and sell goods directly, have grown rapidly. Prior studies suggest that consumer behavior on C2C platforms differs from that on business-to-consumer platforms, prompting research that leverages multimodal information, such as images and text. Among these modalities, image analysis plays a key role in revealing visual cues that influence purchase decisions. Manually annotated labels are often used to ensure interpretability; however, large-scale annotation is costly and labor intensive, limiting scalability. This study addresses this issue by developing deep-learning models that automatically estimate the product attributes that affect purchase decisions. We analyzed the product images of tops from a fast-fashion brand posted on a company-operated C2C platform. Using thumbnail images, we built models to predict five visual attributes: (1) Packaged, (2) Folded, (3) Characters, (4) Official Website Image, and (5) Size. Four architectures, namely ResNet, EfficientNet, ConvNeXt, and Swin Transformer, were compared in terms of accuracy. All classification tasks achieved an accuracy of over 90%, with the best-performing model varying by attribute. These results demonstrate that deep-learning-based automatic annotation can effectively reduce labeling costs and support scalable consumer behavior research on C2C platforms.
Keywords: Consumer-to-Consumer Platform, Deep Learning, Automatic Annotation
DOI: 10.54941/ahfe1006937
AHFE Open Access