GPS-Net: Graph-based Photometric Stereo Network

Authors: Zhuokun Yao, Kun Li, Ying Fu, Haofeng Hu, Boxin Shi

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on the real-world benchmark show that our method achieves excellent performance under both sparse and dense lighting distributions." (Section 5, Experimental Results)
Researcher Affiliation | Academia | 1 College of Intelligence and Computing, Tianjin University, Tianjin, China; 2 School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China; 3 School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, China; 4 Department of Computer Science and Technology, Peking University, Beijing, China; 5 Institute for Artificial Intelligence, Peking University, Beijing, China
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology, nor does it state that the code is publicly available.
Open Datasets | Yes | "For the training, we use the synthetic photometric stereo dataset made by Chen et al. [7], which renders shapes from the Blobby shape dataset [33] and the Sculpture shape dataset [34] with the MERL BRDF dataset [35]."
Dataset Splits | Yes | "This dataset contains 85212 samples rendered under 64 directional lightings, which are randomly split 99:1 for training and validation."
Hardware Specification | Yes | "We train our model using a batch size of 32 for 30 epochs, which takes about 10 hours using a single GeForce GTX 1080 Ti GPU."
Software Dependencies | No | "Our framework is implemented in TensorFlow." The paper names TensorFlow but gives no version numbers for it or for any other software dependency.
Experiment Setup | Yes | "We train our model using a batch size of 32 for 30 epochs, which takes about 10 hours using a single GeForce GTX 1080 Ti GPU. Images in the training dataset are randomly cropped and scaled to 32×32 to increase the training speed, and the testing is performed at the original resolution of input images. The learning rate is initially set to 0.01 and halved every 3 epochs. Adam optimizer is used to optimize our network with default parameters (β1 = 0.9 and β2 = 0.999)."
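The 99:1 random train/validation split quoted above can be sketched as a simple shuffled index split. The helper name and fixed seed below are illustrative assumptions, not details from the paper:

```python
import random

def split_indices(n_samples, train_frac=0.99, seed=0):
    """Randomly split sample indices into train/validation subsets.

    Mirrors the 99:1 split described in the paper; the seed and
    function name are assumptions for illustration only.
    """
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)
    n_train = int(n_samples * train_frac)
    return indices[:n_train], indices[n_train:]

# 85212 samples, as reported in the quoted passage
train_idx, val_idx = split_indices(85212)
```

With 85212 samples this yields 84359 training and 853 validation indices (using floor on the 99% boundary); the paper does not specify how it rounds the split.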
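The quoted schedule (initial learning rate 0.01, halved every 3 epochs over 30 epochs) is a step decay; a minimal sketch of that decay function, where the function name and framework-agnostic form are assumptions:

```python
def step_lr(epoch, base_lr=0.01, decay=0.5, step=3):
    """Learning rate for a given epoch: halved every `step` epochs,
    per the training setup quoted from the paper."""
    return base_lr * decay ** (epoch // step)

# Epochs 0-2 use 0.01, epochs 3-5 use 0.005, and so on through epoch 29.
schedule = [step_lr(e) for e in range(30)]
```

In the paper's setup this schedule would feed an Adam optimizer with default parameters (β1 = 0.9, β2 = 0.999), which matches Adam's standard defaults in TensorFlow.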