Visual Instance Retrieval for Cultural Heritage Artifacts using Feature Pyramid Network
Authors: Luepol Pipanmekaporn, Suwatchai Kamonsantiroj
Abstract: Digitized photographs are commonly employed by archaeologists to assist in uncovering ancient artefacts. However, locating a specific image within a vast collection remains a significant obstacle. The metadata associated with images is often sparse, making keyword-based searches difficult. In this paper, we propose a new visual search method that improves retrieval performance by utilizing visual descriptors generated from a feature pyramid network. This network is a convolutional neural network (CNN) model that incorporates additional modules for feature extraction and enhancement. The first module encodes an image into regional features through spatial pyramid pooling, while the second module emphasizes distinctive spatial features. Additionally, we introduce a two-stage feature attention mechanism to enhance feature quality; a compact descriptor is then formed by aggregating these features for image search. We evaluated the proposed method on benchmark datasets and a vast public collection of Thailand’s ancient artefacts. Experimental results show that the proposed method achieves 77.9% mean average precision, outperforming existing CNN-based visual descriptors.
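The abstract's retrieval pipeline (regional features via spatial pyramid pooling, then aggregation into a single compact descriptor) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the grid levels, max pooling, and sum aggregation are assumptions, and the two-stage attention module is omitted.

```python
import numpy as np

def spatial_pyramid_regions(fmap, levels=(1, 2)):
    """Split a CxHxW feature map into pyramid grid cells and pool each.

    Hypothetical sketch of the first module: each level n divides the map
    into an n x n grid; every cell yields one C-dim regional feature via
    max pooling.
    """
    C, H, W = fmap.shape
    regions = []
    for n in levels:
        hs = np.linspace(0, H, n + 1, dtype=int)
        ws = np.linspace(0, W, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                cell = fmap[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                regions.append(cell.max(axis=(1, 2)))
    return np.stack(regions)  # shape: (num_regions, C)

def aggregate_descriptor(regions):
    """Sum-aggregate regional features and L2-normalize into one compact
    descriptor suitable for nearest-neighbour image search (attention
    weighting from the second module is left out of this sketch)."""
    d = regions.sum(axis=0)
    return d / (np.linalg.norm(d) + 1e-12)

# Example: a 512-channel 14x14 feature map, as a CNN backbone might emit.
fmap = np.random.rand(512, 14, 14).astype(np.float32)
desc = aggregate_descriptor(spatial_pyramid_regions(fmap))
```

Retrieval would then rank database images by cosine similarity between such descriptors; because each descriptor is L2-normalized, a dot product suffices.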
Keywords: Landmark retrieval, Image retrieval, Deep Learning