Tse-Yu Pan
Yi-Zhu Dai
Harmonizing the style of all the furniture placed within a constrained space or scene is an important principle of interior design. In this paper, we propose a furniture style compatibility recommendation approach that lets users create a harmonious 3D virtual scene from 2D furniture photos. Most previous work on 3D model style analysis measures style similarity or compatibility using predefined geometric features extracted from 3D models. However, "style" is a high-level semantic concept that is difficult to describe explicitly with hand-crafted geometric features. Moreover, analyzing the style compatibility between two or more pieces of furniture belonging to different classes (e.g., a table and a lamp) is much more challenging, since the given furniture may have very distinctive structures or geometric elements. Deep neural networks have recently been shown to better mimic the perception of the human visual cortex, so we propose to analyze style compatibility between 3D furniture models of different classes with a Cross-Class Triplet Convolutional Neural Network (CNN). We conducted experiments on a collected dataset containing 420 textured 3D furniture models. A group of raters recruited from Amazon Mechanical Turk (AMT) evaluated the comparative suitability of paired models within the dataset. The experimental results show that the proposed deep-learning-based furniture style compatibility method outperforms the state-of-the-art method and can be used to efficiently generate harmonious virtual scenes.
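The core of a triplet network like the one the abstract describes is a margin loss that pulls style-compatible items together in an embedding space and pushes incompatible ones apart. The following is a minimal, self-contained sketch of that loss on plain embedding vectors; the function names, dimensions, and example values are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch of a triplet margin loss for style embeddings.
# All names and numbers here are illustrative, not the paper's own.

def euclidean_sq(a, b):
    """Squared Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """max(0, d(a, p) - d(a, n) + margin): zero once the compatible
    pair is closer than the incompatible pair by at least `margin`."""
    return max(0.0,
               euclidean_sq(anchor, positive)
               - euclidean_sq(anchor, negative)
               + margin)

# A cross-class but style-compatible pair (e.g., a table and a lamp of
# the same style) should embed closer than a style-clashing pair.
anchor   = [0.0, 1.0]   # e.g., embedding of a table photo
positive = [0.1, 0.9]   # same style, different furniture class
negative = [2.0, -1.0]  # clashing style
loss = triplet_loss(anchor, positive, negative)
print(loss)  # 0.0: this triplet is already well separated
```

During training, a CNN producing the embeddings would be optimized so that this loss approaches zero over many such cross-class triplets.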
@inproceedings{pan2017deep,
  title={Deep model style: Cross-class style compatibility for 3D furniture within a scene},
  author={Pan, Tse-Yu and Dai, Yi-Zhu and Tsai, Wan-Lun and Hu, Min-Chun},
  booktitle={2017 IEEE International Conference on Big Data},
  pages={4307--4313},
  year={2017},
  organization={IEEE}
}