Non-Lambertian Multispectral Photometric Stereo via Spectral Reflectance Decomposition
Authors: Jipeng Lv, Heng Guo, Guanying Chen, Jinxiu Liang, Boxin Shi
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on both synthetic and real-world scenes demonstrate the effectiveness of our method. |
| Researcher Affiliation | Academia | ¹Peking University, ²Beijing University of Posts and Telecommunications, ³Osaka University, ⁴The Chinese University of Hong Kong, Shenzhen |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described in this paper. |
| Open Datasets | Yes | To train our Neural MPS, we build a synthetic training dataset with shapes coming from the Blobby dataset [Johnson and Adelson, 2011] as well as the Sculpture dataset [Wiles and Zisserman, 2017]. Each shape in the dataset is rendered with 51 measured isotropic spectral BRDFs [Dupuy and Jakob, 2018]. |
| Dataset Splits | No | The paper describes a synthetic training dataset and a separate test dataset, but it does not give the explicit split information (percentages, sample counts, or any mention of a validation set) needed to reproduce the data partitioning. |
| Hardware Specification | No | The paper does not report the hardware used for its experiments (e.g., exact GPU/CPU models, clock speeds, or memory amounts). |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | No | The paper states that the SNE module is re-trained following the training strategy of PS-FCN [Chen et al., 2018] and defines the loss function, but the main text does not provide concrete hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or other detailed training configuration. |