Disentanglement Analysis with Partial Information Decomposition
Authors: Seiya Tokui, Issei Sato
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We develop an experimental protocol to assess how increasingly entangled representations are evaluated with each metric and confirm that the proposed metric correctly responds to entanglement. Through experiments on variational autoencoders, we find that models with similar disentanglement scores have a variety of characteristics in entanglement, for each of which a distinct strategy may be required to obtain a disentangled representation. |
| Researcher Affiliation | Academia | Seiya Tokui, The University of Tokyo (tokui@g.ecc.u-tokyo.ac.jp); Issei Sato, The University of Tokyo (sato@g.ecc.u-tokyo.ac.jp) |
| Pseudocode | No | The paper describes methods and calculations but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating the release of open-source code for the methodology described. |
| Open Datasets | Yes | We used the DSPRITES and 3DSHAPES datasets for our analysis. |
| Dataset Splits | No | The paper mentions using DSPRITES and 3DSHAPES datasets and how models were trained, but it does not provide specific details on training, validation, or test dataset splits (e.g., percentages, sample counts, or explicit split methodologies). |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU or CPU models used for running experiments. |
| Software Dependencies | No | The paper mentions using Adam for optimization but does not provide specific version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | The hyperparameters used in training are listed in Table 3 and Table 4. For hyperparameters not listed in the tables, we used the values suggested in the original papers. |
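Since the paper names the dSprites and 3D Shapes datasets without restating their structure, the following is a minimal sketch of the latent-factor layouts of the standard released versions of those datasets (the factor names and sizes are background knowledge about the datasets, not taken from the paper):

```python
# Latent-factor layouts of the standard dSprites and 3D Shapes releases
# (an assumption based on the public dataset descriptions; the paper
# itself does not restate them). Every combination of factor values
# corresponds to exactly one image.
from math import prod

DSPRITES_FACTORS = {
    "shape": 3,        # square, ellipse, heart
    "scale": 6,
    "orientation": 40,
    "pos_x": 32,
    "pos_y": 32,
}

SHAPES3D_FACTORS = {
    "floor_hue": 10,
    "wall_hue": 10,
    "object_hue": 10,
    "scale": 8,
    "shape": 4,
    "orientation": 15,
}

def num_images(factors):
    """Total dataset size: the product of all factor cardinalities."""
    return prod(factors.values())

print(num_images(DSPRITES_FACTORS))  # 737280
print(num_images(SHAPES3D_FACTORS))  # 480000
```

Because both datasets enumerate the full Cartesian product of their ground-truth factors, no held-out split is required for the mutual-information-style analysis the paper performs, which is consistent with the "Dataset Splits: No" finding above.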
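The paper states only that Adam was used, with unlisted hyperparameters deferred to the original papers' suggested values. For reference, a minimal self-contained sketch of the Adam update rule on a scalar problem, using the commonly suggested defaults β1 = 0.9, β2 = 0.999, ε = 1e-8 (an assumption here; the learning rate and the toy objective are illustrative, not from the paper):

```python
import math

def adam_minimize(grad, x0, lr=0.1, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=1000):
    """Minimize a scalar function, given its gradient, with Adam."""
    x, m, v = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g        # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
        m_hat = m / (1 - beta1 ** t)           # bias correction
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

# Toy usage: minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3);
# the iterate converges toward x = 3.
x_min = adam_minimize(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 2))
```

The update rule itself is library- and version-independent, which is why the missing dependency versions affect reproducibility of the training pipeline rather than the optimizer's definition.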