Shape and Illumination from Shading using the Generic Viewpoint Assumption
Authors: Daniel Zoran, Dilip Krishnan, José Bento, Bill Freeman
NeurIPS 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We use the GVA algorithm to estimate shape and illumination from synthetic, grayscale shading images, rendered using 18 different models from the MIT/Berkeley intrinsic images dataset [3] and 7 models from the Harvard dataset in [30]. Each of these models is rendered using several different light sources: the MIT models are lit with a natural-light dataset which comes with each model, and we use 2 lights from the lab dataset in order to light the models from [30], resulting in 32 different images. We compare to the SIFS algorithm of [3], a subset of their algorithm that does not estimate albedo, using their publicly released code. We use the same error measures as in [3]: the error for the normals is measured using Median Angular Error (MAE) in radians, and for the light we take the resulting light coefficients and render a sphere lit by this light. The GVA term helps significantly in estimation results. (A sketch of these error measures appears after the table.) |
| Researcher Affiliation | Academia | Daniel Zoran (CSAIL, MIT), danielz@mit.edu; Dilip Krishnan (CSAIL, MIT), dilipkay@mit.edu; Jose Bento (Boston College), jose.bento@bc.edu; William T. Freeman (CSAIL, MIT), billf@mit.edu |
| Pseudocode | No | No pseudocode or algorithm blocks are present in the paper. |
| Open Source Code | Yes | We will make our code publicly available at http://dilipkay.wordpress.com/sfs/ |
| Open Datasets | Yes | We use the GVA algorithm to estimate shape and illumination from synthetic, grayscale shading images, rendered using 18 different models from the MIT/Berkeley intrinsic images dataset [3] and 7 models from the Harvard dataset in [30]. |
| Dataset Splits | No | The paper refers to using datasets and evaluating performance but does not specify explicit train/validation/test splits (e.g., percentages, sample counts, or cross-validation setup). |
| Hardware Specification | No | The paper mentions running times and that the code is 'unoptimized MATLAB code,' but it does not provide specific details about the hardware (e.g., CPU, GPU models, or memory) used for the experiments. |
| Software Dependencies | No | The paper mentions that the code is 'unoptimized MATLAB code' but does not specify any version numbers for MATLAB or any other software libraries or dependencies used. |
| Experiment Setup | Yes | λ_img and λ_GVA are hyper-parameters, which we set to 2 and 1 respectively for all experiments. We initialize with an all-zeros depth (corresponding to a flat surface), and the light is initialized to the mean light of the natural-illumination dataset in [3]. We perform the estimation at multiple scales using V-sweeps: solving at a coarse scale, upscaling, solving at a finer scale, then downsampling the result, and repeating the process 3 times. The same parameter settings were used in all cases. (A sketch of this multi-scale schedule appears after the table.) |
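
The Research Type row quotes the paper's evaluation protocol: Median Angular Error (MAE) in radians for surface normals, and a rendered sphere for comparing lights. The sketch below is a minimal illustration of those two measures, not the authors' code; the function names are ours, and the assumption that the light is a length-9 vector of second-order spherical-harmonic coefficients follows the illumination model of [3] rather than anything stated in this row.

```python
import numpy as np


def median_angular_error(n_est, n_true, mask=None):
    """Median Angular Error (MAE), in radians, between two normal maps.

    n_est, n_true: (H, W, 3) arrays of surface normals.
    mask: optional boolean (H, W) array selecting valid object pixels.
    """
    # Normalize both normal maps to unit length.
    n_est = n_est / np.linalg.norm(n_est, axis=-1, keepdims=True)
    n_true = n_true / np.linalg.norm(n_true, axis=-1, keepdims=True)
    # Angle between corresponding normals.
    cos_ang = np.clip(np.sum(n_est * n_true, axis=-1), -1.0, 1.0)
    ang = np.arccos(cos_ang)
    if mask is not None:
        ang = ang[mask]
    return float(np.median(ang))


def render_sphere(light_coeffs, size=128):
    """Render a unit sphere under low-order spherical-harmonic lighting.

    light_coeffs: length-9 SH coefficient vector (an assumption here,
    matching the illumination model of [3]); the paper compares spheres
    rendered with the estimated and ground-truth lights.
    """
    ys, xs = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    inside = xs ** 2 + ys ** 2 <= 1.0
    zs = np.sqrt(np.clip(1.0 - xs ** 2 - ys ** 2, 0.0, None))
    # Second-order SH basis evaluated at the sphere's surface normals.
    basis = np.stack([
        np.ones_like(xs), xs, ys, zs,
        xs * ys, xs * zs, ys * zs,
        xs ** 2 - ys ** 2, 3.0 * zs ** 2 - 1.0,
    ], axis=-1)
    shading = basis @ np.asarray(light_coeffs, dtype=float)
    return np.where(inside, shading, 0.0)
```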
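
The Experiment Setup row describes a coarse-to-fine "V-sweep" schedule with λ_img = 2, λ_GVA = 1, a flat-depth initialization, and the mean natural light from [3]. The skeleton below sketches that schedule under stated assumptions: `solve_at_scale` is a hypothetical placeholder for the authors' per-scale GVA optimization (their released code is MATLAB, not Python), and the coarse-scale factor of 0.5 is our assumption, since the paper does not give it.

```python
import numpy as np
from scipy.ndimage import zoom


def v_sweep_estimate(image, solve_at_scale, mean_light,
                     n_sweeps=3, lambda_img=2.0, lambda_gva=1.0,
                     coarse=0.5):
    """Coarse-to-fine 'V-sweep' schedule following the Experiment Setup row.

    solve_at_scale(image, depth_init, light_init, lambda_img, lambda_gva)
    is assumed to run one per-scale shape-and-light optimization and return
    an updated (depth, light) pair; it stands in for the authors' solver.
    """
    # Initialization described in the paper: all-zeros (flat) depth and the
    # mean light of the natural-illumination dataset from [3].
    depth = np.zeros(image.shape[:2])
    light = np.asarray(mean_light, dtype=float)
    image_coarse = zoom(image, coarse)

    for _ in range(n_sweeps):
        # Solve at the coarse scale.
        depth_coarse = zoom(depth, coarse)
        depth_coarse, light = solve_at_scale(image_coarse, depth_coarse, light,
                                             lambda_img, lambda_gva)
        # Upscale the coarse solution and refine at the finer scale.
        factors = (image.shape[0] / depth_coarse.shape[0],
                   image.shape[1] / depth_coarse.shape[1])
        depth = zoom(depth_coarse, factors)
        depth, light = solve_at_scale(image, depth, light,
                                      lambda_img, lambda_gva)
        # The result is downsampled again at the start of the next sweep.
    return depth, light
```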