Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Disentangled Information Bottleneck
Authors: Ziqi Pan, Li Niu, Jianfu Zhang, Liqing Zhang
AAAI 2021, pp. 9285-9293 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through experimental results, we justify our theoretical statements and show that DisenIB performs well in terms of generalization (Shamir, Sabato, and Tishby 2010), robustness to adversarial attack (Alemi et al. 2017) and out-of-distribution data detection (Alemi, Fischer, and Dillon 2018), and supervised disentangling. |
| Researcher Affiliation | Academia | MoE Key Lab of Artificial Intelligence, Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China. EMAIL, EMAIL |
| Pseudocode | No | The paper describes its method in prose and mathematical equations but does not include any distinct pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing open-source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We evaluate our method in terms of generalization (Shamir, Sabato, and Tishby 2010), robustness to adversarial attack (Alemi et al. 2017) and out-of-distribution data detection (Alemi, Fischer, and Dillon 2018) on benchmark datasets: MNIST (LeCun et al. 1998), Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017), and CIFAR-10 (Krizhevsky, Hinton et al. 2009). We also provide results on more challenging natural image datasets: object-centric Tiny-ImageNet (Deng et al. 2009) and scene-centric SUN-RGBD (Song, Lichtenberg, and Xiao 2015). We also study the disentangling behavior of our method on MNIST (LeCun et al. 1998), Sprites (Reed et al. 2015) and dSprites (Matthey et al. 2017). |
| Dataset Splits | No | The paper mentions a 'training set' and a 'test set' but does not specify a distinct validation set, nor does it detail how the splits were created (e.g., percentages or exact counts) for reproducibility. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU models, CPU specifications, or memory. |
| Software Dependencies | No | The paper mentions implementing methods using neural networks and cites related works that may rely on particular software, but it does not list any specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions). |
| Experiment Setup | No | The paper states, 'Due to space limitation, the implementation details can be found in supplementary,' indicating that specific experimental setup details like hyperparameters are not included in the main text. |