Relevant sparse codes with variational information bottleneck

Authors: Matthew Chalk, Olivier Marre, Gašper Tkačik

NeurIPS 2016

For each reproducibility variable below, the assessed result is followed by the LLM's supporting response.

Research Type: Experimental
"To show this, we constructed artificial 9x9 image patches from combinations of oriented bar features. Each bar had a Gaussian cross-section of width 1.2 pixels, with maximum amplitude drawn from a standard normal distribution. Patches were constructed by linearly combining 3 bars with uniformly random orientations and positions. Initially, we considered a simple de-noising task, where the input, X, was a noisy version of the original image patches (Gaussian noise with variance σ² = 0.005; figure 1A). Training data consisted of 10,000 patches."

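The quoted description pins down most of the generative process but leaves some details open, notably how bar position is parameterized and whether "width" means the standard deviation of the Gaussian cross-section. A minimal sketch under those assumptions:

    import numpy as np

    def make_patch(size=9, n_bars=3, width=1.2, rng=None):
        # One patch: a linear sum of oriented bars. "width" is treated as
        # the sigma of the Gaussian cross-section (an assumption; the text
        # says only "width 1.2 pixels").
        if rng is None:
            rng = np.random.default_rng()
        yy, xx = np.mgrid[:size, :size].astype(float)
        c = (size - 1) / 2.0
        patch = np.zeros((size, size))
        for _ in range(n_bars):
            theta = rng.uniform(0.0, np.pi)     # uniformly random orientation
            offset = rng.uniform(-c, c)         # uniformly random position
            amp = rng.standard_normal()         # max amplitude ~ N(0, 1)
            # signed perpendicular distance of each pixel from the bar's axis
            d = (xx - c) * np.sin(theta) - (yy - c) * np.cos(theta) - offset
            patch += amp * np.exp(-0.5 * (d / width) ** 2)
        return patch

    rng = np.random.default_rng(0)
    Y = np.stack([make_patch(rng=rng) for _ in range(10_000)])  # clean patches
    X = Y + rng.normal(scale=np.sqrt(0.005), size=Y.shape)      # noisy inputs X
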
Researcher Affiliation: Academia
Matthew Chalk, IST Austria, Am Campus 1, A-3400 Klosterneuburg, Austria; Olivier Marre, Institut de la Vision, 17 Rue Moreau, 75012 Paris, France; Gašper Tkačik, IST Austria, Am Campus 1, A-3400 Klosterneuburg, Austria.

Pseudocode: No
The paper describes iterative updates and algorithmic steps within the main text and equations (e.g., "To maximize L with respect to parameters Θ..." and "Setting the derivative to zero, we can solve for W directly."), but it does not include an explicitly labeled pseudocode or algorithm block.

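To make the quoted phrases concrete, here is a runnable toy of the alternating pattern they describe: gradient updates on encoder parameters Θ, with the linear decoder W re-solved in closed form at every step. This is only a sketch, not the authors' algorithm; the loss below (reconstruction error plus an L1 sparsity penalty) stands in for the paper's variational information-bottleneck objective, and the tanh encoder is an assumption.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((500, 81))  # stand-in inputs (flattened 9x9 patches)
    n_units, lam, lr = 20, 0.1, 1e-2

    Theta = 0.01 * rng.standard_normal((81, n_units))  # encoder parameters
    for _ in range(200):
        R = np.tanh(X @ Theta)                         # encoder responses
        # "Setting the derivative to zero, we can solve for W directly":
        # a ridge-regularised least-squares decoder has a closed form.
        W = np.linalg.solve(R.T @ R + 1e-6 * np.eye(n_units), R.T @ X)
        err = R @ W - X                                # reconstruction error
        # gradient of 0.5*||err||^2 + lam*||R||_1 w.r.t. Theta, with W held
        # fixed, back-propagated through the tanh nonlinearity
        dR = err @ W.T + lam * np.sign(R)
        Theta -= lr * (X.T @ (dR * (1.0 - R**2))) / len(X)
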
Open Source Code: No
The paper does not contain any statement about releasing source code for the methodology, nor does it provide a link to a code repository.

Open Datasets: Yes
"Finally, we repeated the occlusion task with handwritten digits, taken from the USPS dataset (www.gaussianprocess.org/gpml/data)."

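For reference, the GPML site distributes USPS as a MATLAB file; a hedged loading sketch follows, in which the file name (usps_resampled.mat) and the variable names are assumptions about that distribution rather than details stated in the paper:

    import numpy as np
    from scipy.io import loadmat

    # Assumed file/variable names for the GPML distribution of USPS;
    # verify against the actual download before relying on them.
    data = loadmat("usps_resampled.mat")
    X_train = data["train_patterns"].T.reshape(-1, 16, 16)  # 4649 train patches
    X_test = data["test_patterns"].T.reshape(-1, 16, 16)    # 4649 test patches
    print(X_train.shape, X_test.shape)
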
Dataset Splits: No
The paper states "Training data consisted of 10,000 patches" and "We used 4649 training and 4649 test patches, of 16x16 pixels", but it does not specify a separate validation split or explicit train/validation/test percentages.

Hardware Specification: No
The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments.

Software Dependencies: No
The paper does not list any software dependencies with specific version numbers (e.g., "Python 3.8", "PyTorch 1.9").

Experiment Setup: No
The paper describes the datasets and tasks (e.g., "Training data consisted of 10,000 patches"), but it does not specify concrete experimental setup details such as hyperparameter values (e.g., learning rate, batch size), optimizer settings, or training schedules.