Construction of a PointNet-based Autoencoder Using a 3D Scene Dataset for Feature Extraction from Indoor Space Point Clouds Excluding Interior Details

Open Access
Article
Conference Proceedings
Authors: Takahiro Miki, Yusuke Osawa, Keiichi Watanuki

Abstract: In this study, as a first step toward automatically constructing a virtual space with a high degree of expressive freedom that reflects the spatial shape of a real space and the arrangement of objects within it, we focused on the global shape of the indoor space without interior details and constructed a PointNet-based autoencoder to extract features of that shape. To train the machine learning model, we used ScanNet++, a 3D indoor-space dataset, converted into point cloud data. Feature extraction was performed on two types of point cloud data: (1) ScanNet++ point clouds held out from training, and (2) point clouds of an indoor space obtained by 3D scanning a real environment. Feature extraction was evaluated by comparing the shapes of the input and reconstructed output point clouds and by measuring the distance error between them. Both the ScanNet++ data and the scanned indoor-space data were reconstructed as rectangular shapes, and the overall shapes of the walls and floors were largely consistent with the input, indicating that spatial features were extracted; interior furniture and other objects, however, were removed. To investigate the applicability of the model, feature extraction was also performed on indoor-space 3D objects with elliptical shapes. In future work, we will investigate an autoencoder that extracts features from the local shape around each point using a point-cloud convolution method, along with feature extraction following region classification within the interior space.
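The pipeline the abstract describes, a PointNet-style encoder that applies a shared per-point MLP and a symmetric max-pooling operation to obtain a global shape feature, a decoder that reconstructs a point cloud from that feature, and a distance-error evaluation between input and output, can be sketched in NumPy. The layer sizes, random weights, and the use of symmetric Chamfer distance as the error metric are illustrative assumptions here, not the paper's actual training configuration.

```python
import numpy as np

def pointnet_encode(points, w1, w2):
    """PointNet-style encoder: shared per-point MLP, then symmetric max pooling.
    points: (N, 3) point cloud; returns a global feature vector."""
    h = np.maximum(points @ w1, 0.0)   # shared MLP layer 1 (ReLU), applied per point
    h = np.maximum(h @ w2, 0.0)        # shared MLP layer 2 (ReLU)
    return h.max(axis=0)               # max pooling: order-invariant global feature

def decode(feature, w_dec, n_points):
    """Fully connected decoder: global feature -> reconstructed (n_points, 3) cloud."""
    return (feature @ w_dec).reshape(n_points, 3)

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(256, 3))          # toy stand-in for a room point cloud
w1 = rng.normal(scale=0.1, size=(3, 64))             # untrained illustrative weights
w2 = rng.normal(scale=0.1, size=(64, 128))
w_dec = rng.normal(scale=0.1, size=(128, 256 * 3))

feat = pointnet_encode(pts, w1, w2)                  # 128-D global shape feature
recon = decode(feat, w_dec, 256)                     # reconstructed point cloud
err = chamfer_distance(pts, recon)                   # reconstruction distance error
```

The max pooling step is what makes the encoder insensitive to point ordering, and it is also why fine interior detail tends to be discarded in favor of the dominant room geometry: only the strongest per-dimension activation across all points survives into the global feature.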

Keywords: 3D Point Clouds, Feature Extraction, PointNet, Autoencoder, ScanNet++, Virtual Space

DOI: 10.54941/ahfe1006066

