Physically plausible surface appearance estimated by the proposed InexactSA network from a single image of a near-planar material under unknown natural lighting, and revisualized under a novel lighting condition.
This paper presents a deep-learning-based method for estimating the spatially varying surface reflectance properties from a single image of a planar surface under unknown natural lighting, trained using only photographs of exemplar materials, without referencing any artist-generated or densely measured spatially varying surface reflectance training data. Our method builds on an empirical study of Li et al.'s self-augmentation training strategy, which shows that the main role of the initial approximate network is to provide guidance on resolving the inherent ambiguities in single-image appearance estimation. Furthermore, our study indicates that this initial network can be inexact (i.e., trained from other data sources) as long as it resolves these ambiguities. We show that a single-image estimation network trained without manually labeled data outperforms prior work in terms of both accuracy and generality.
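The self-augmentation idea underlying our study can be sketched with a deliberately tiny toy model. Everything below is an illustrative assumption, not the paper's actual SVBRDF networks or renderer: the "renderer" is a fixed linear map from reflectance parameters to an image, and the "network" is a linear map fitted by least squares. The key mechanism carries over, though: an inexact initial network predicts parameters for unlabeled images, the predictions are re-rendered, and the (re-rendered image, prediction) pairs — which are exactly consistent with the renderer even when the prediction itself is off — serve as additional supervised training data.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 4))  # toy "renderer" matrix: image x = A @ p


def render(p):
    return A @ p


def fit(images, params):
    # Fit a linear "network" W with W @ x ~ p in the least-squares sense.
    X = np.stack(images)  # (n, 8)
    P = np.stack(params)  # (n, 4)
    Wt, *_ = np.linalg.lstsq(X, P, rcond=None)
    return Wt.T           # (4, 8)


def consistency_error(W, n_test=100):
    # Mean error when predicting parameters back from their own rendering.
    ps = rng.normal(size=(n_test, 4))
    return np.mean([np.linalg.norm(W @ render(p) - p) for p in ps])


# 1) An "inexact" initial network: fitted on a few noisy training pairs,
#    standing in for a network trained from a different data source.
ps0 = [rng.normal(size=4) for _ in range(6)]
xs0 = [render(p) + 0.3 * rng.normal(size=8) for p in ps0]
W = fit(xs0, ps0)
err_before = consistency_error(W)

# 2) Self-augmentation: predict parameters for unlabeled images,
#    re-render each prediction, and append (re-rendered image,
#    prediction) as a new supervised training pair.
unlabeled = [render(rng.normal(size=4)) + 0.1 * rng.normal(size=8)
             for _ in range(50)]
aug_xs, aug_ps = list(xs0), list(ps0)
for x in unlabeled:
    p_hat = W @ x                 # (possibly inaccurate) prediction
    aug_xs.append(render(p_hat))  # its rendering is consistent by construction
    aug_ps.append(p_hat)
W = fit(aug_xs, aug_ps)
err_after = consistency_error(W)
```

In this toy setting the self-augmented pairs pull the network toward inverting the renderer, so the render-and-predict consistency error drops after augmentation, mirroring the role the initial (possibly inexact) network plays in the full method: it only needs to disambiguate plausible parameter explanations, after which self-augmentation refines the mapping.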
Appearance Modeling, SVBRDF, CNN, Self-augmentation
Paper and video
Our implementation is based on TensorFlow; all scripts for data generation and training are included in the released code.
The trained InexactSA network for real-world materials.
Please visit our code page to find which files are needed for your purpose. The lighting maps and the real-world material test data are rearranged from the released data of SANet. The input images are generated from OpenSurfaces.
Wenjie Ye, Xiao Li, Yue Dong, Pieter Peers, and Xin Tong. Single Image Surface Appearance Modeling with Self-augmented CNNs and Inexact Supervision. Computer Graphics Forum 37, 7 (2018), 201-211.
We would like to thank the reviewers for their constructive feedback. Pieter Peers was partially supported by NSF grant IIS-1350323 and gifts from Google, Activision, and Nvidia.