Learning Sensor Multiplexing Design through Back-propagation

Authors: Ayan Chakrabarti

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We measure performance in terms of the reconstruction PSNR of all non-overlapping 64×64 patches from all test images (roughly 40,000 patches). Table 1 compares the median PSNR values across all patches for reconstructions using our network to those from traditional methods, at two noise levels: low noise corresponding to an STD of 0.0025, and moderate noise corresponding to 0.01.
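The evaluation metric described above (median PSNR over non-overlapping 64×64 patches) is straightforward to reproduce. Below is a minimal sketch, not the authors' code; the function names `psnr` and `patch_psnrs` are ours, and intensities are assumed to lie in [0, 1]:

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """PSNR in dB between two images/patches with intensities in [0, peak]."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def patch_psnrs(ref_img, est_img, size=64):
    """PSNR of every non-overlapping size x size patch (partial border patches dropped)."""
    h, w = ref_img.shape[:2]
    vals = []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            vals.append(psnr(ref_img[y:y + size, x:x + size],
                             est_img[y:y + size, x:x + size]))
    return vals
```

The paper then reports the median of these per-patch values across all test images, rather than a per-image average.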
Researcher Affiliation | Academia | Ayan Chakrabarti, Toyota Technological Institute at Chicago, 6045 S. Kenwood Ave., Chicago, IL; ayanc@ttic.edu
Pseudocode | No | The paper describes the algorithms and network architectures in narrative text and through a diagram (Figure 1), but it does not include any formal pseudocode blocks or sections labeled 'Algorithm'.
Open Source Code | Yes | An implementation of our method, along with trained models, data, and results, is available at our project page at http://www.ttic.edu/chakrabarti/learncfa/.
Open Datasets | Yes | Like [4], we use the Gehler-Shi database [6, 21] that consists of 568 color images of indoor and outdoor scenes, captured under various illuminants.
Dataset Splits | Yes | Unlike [4], who only used 10 images for evaluation, we use the entire dataset: 56 images for testing, 461 images for training, and the remaining 51 images as a validation set to fix hyper-parameters.
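The reported split (56 test / 461 train / 51 validation out of 568 images) can be reproduced with a simple index partition. A hedged sketch follows; the paper does not state how images were assigned to splits, so the shuffle and seed here are purely illustrative:

```python
import random

def split_indices(n_images=568, n_test=56, n_val=51, seed=0):
    """Partition image indices into train/val/test with the paper's counts.

    The random shuffle and seed are assumptions; the paper does not
    specify the assignment procedure.
    """
    idx = list(range(n_images))
    random.Random(seed).shuffle(idx)
    test = idx[:n_test]
    val = idx[n_test:n_test + n_val]
    train = idx[n_test + n_val:]  # remaining 461 images
    return train, val, test
```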
Hardware Specification | Yes | We thank NVIDIA corporation for the donation of a Titan X GPU used in this research.
Software Dependencies | No | The paper mentions general techniques and components like 'neural networks', 'back-propagation', 'Soft-max layer', and 'convolution layers' but does not specify any particular software libraries, frameworks (e.g., TensorFlow, PyTorch), or their version numbers.
Experiment Setup | Yes | We learn a repeating pattern with P = 8. In our reconstruction network, we set the number of proposals K for each output intensity to 24, and the number of convolutional layer outputs F in the second path of our network to 128. When learning our sensor multiplexing pattern, we increase the scalar soft-max factor α_t in (1) according to a quadratic schedule as α_t = 1 + (γt)², where γ = 2.5 × 10⁻⁵ in our experiments. We use a batch size of 128, with a learning rate of 0.001 for 1.5 million iterations.
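The quadratic temperature schedule quoted above is fully specified by the two constants given. A minimal sketch (the function name is ours) shows how α_t evolves over the stated 1.5 million iterations, starting at 1 and growing quadratically so that the soft-max pattern hardens toward a discrete color filter assignment:

```python
def alpha(t, gamma=2.5e-5):
    """Quadratic soft-max temperature schedule: alpha_t = 1 + (gamma * t)^2."""
    return 1.0 + (gamma * t) ** 2

# At t = 0 the schedule gives alpha = 1 (a soft, nearly uniform soft-max);
# by the final iteration t = 1,500,000 it reaches 1 + 37.5^2 = 1407.25,
# making the soft-max output close to a hard one-hot selection.
```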