Intrinsic Image Decomposition by Pursuing Reflectance Image

Authors: Tzu-Heng Lin, Pengxiao Wang, Yizhou Wang

IJCAI 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments show that our proposed network outperforms current state-of-the-art results by a large margin on the most challenging real-world IIW dataset. We also surprisingly find that on the densely labeled datasets (MIT and MPI-Sintel), our network can also achieve state-of-the-art results on both reflectance and shading images, when we only apply supervision on the reflectance images during training." |
| Researcher Affiliation | Academia | Tzu-Heng Lin¹, Pengxiao Wang¹, and Yizhou Wang¹·²; ¹School of Computer Science, Peking University; ²Center on Frontiers of Computing Studies, Peking University |
| Pseudocode | No | The paper describes the network components and their functionality but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/lzhbrian/RDNet |
| Open Datasets | Yes | Sparsely labeled: the real-world IIW dataset [Bell et al., 2014] contains 872,161 pairwise reflectance comparisons across 5,230 photos. Densely labeled: the MPI-Sintel dataset [Butler et al., 2012] contains 8,950 images from 18 scene-level, computer-generated image sequences; the MIT dataset [Grosse et al., 2009] contains 20 object-level images, each under 11 different lighting conditions. |
| Dataset Splits | No | The paper mentions training and test sets but does not explicitly describe a validation split (e.g., percentages or sample counts). |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for the experiments are mentioned in the paper. |
| Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python or PyTorch versions) needed to replicate the experiments. |
| Experiment Setup | No | The paper describes the loss functions used for training but does not provide setup details such as hyperparameters (e.g., learning rate, batch size, optimizer type, number of epochs). |