Caveats for information bottleneck in deterministic scenarios
Authors: Artemy Kolchinsky, Brendan D. Tracey, Steven Van Kuyk
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the three caveats on the MNIST dataset. |
| Researcher Affiliation | Academia | Artemy Kolchinsky & Brendan D. Tracey, Santa Fe Institute, Santa Fe, NM 87501, USA ({artemyk,tracey.brendan}@gmail.com); Steven Van Kuyk, School of Engineering and Computer Science, Victoria University of Wellington, New Zealand (steven.jvk@gmail.com); Dept. of Aeronautics & Astronautics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA |
| Pseudocode | No | The paper describes the neural network architecture and training process in text, but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | TensorFlow code can be found at https://github.com/artemyk/ibcurve. |
| Open Datasets | Yes | We demonstrate the three caveats using the MNIST dataset of hand-written digits. ... This dataset contains a training set of 60,000 images and a test set of 10,000 images, each labeled according to digit. |
| Dataset Splits | Yes | This dataset contains a training set of 60,000 images and a test set of 10,000 images, each labeled according to digit. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions "TensorFlow code" and the "Adam algorithm", but does not provide version numbers for TensorFlow or any other software libraries or dependencies. |
| Experiment Setup | Yes | The neural network was trained using the Adam algorithm (Kingma & Ba, 2014) with a mini-batch size of 128 and a learning rate of 10⁻⁴. ... Training was run for 200 epochs. |
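
The reported setup is specific enough to reconstruct a training loop. Below is a minimal TensorFlow/Keras sketch of that configuration; the network layers are a placeholder assumption (the paper's actual encoder/bottleneck/decoder architecture lives in the authors' repository at https://github.com/artemyk/ibcurve), and only the dataset split and the quoted hyperparameters (Adam, learning rate 10⁻⁴, mini-batch size 128, 200 epochs) come from the paper.

```python
# Sketch of the reported training setup, not the authors' implementation:
# MNIST's standard 60,000/10,000 train/test split, trained with Adam,
# mini-batch size 128, learning rate 1e-4, for 200 epochs.
import tensorflow as tf

# Standard MNIST split: 60,000 training images, 10,000 test images.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Placeholder classifier: the paper's model inserts a stochastic
# bottleneck layer between encoder and decoder, omitted in this sketch.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Hyperparameters quoted from the paper: Adam, lr = 1e-4, batch 128.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Training was run for 200 epochs, evaluated on the held-out test set.
model.fit(x_train, y_train, batch_size=128, epochs=200,
          validation_data=(x_test, y_test))
```

Note that the missing pieces flagged in the table above (no pseudocode, no hardware specification, no pinned software versions) are exactly what this sketch has to assume: the layer sizes and TensorFlow version here are illustrative choices, not reported values.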