Continual Unsupervised Disentangling of Self-Organizing Representations
Authors: Zhiyuan Li, Xiajun Jiang, Ryan Missel, Prashnna Kumar Gyawali, Nilesh Kumar, Linwei Wang
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We tested the presented method on a split version of 3DShapes to provide the quantitative disentanglement evaluation of continually learned representations, and further demonstrated its ability to continually disentangle new representations and improve shared downstream tasks in benchmark datasets. |
| Researcher Affiliation | Academia | Zhiyuan Li¹, Xiajun Jiang¹, Ryan Missel¹, Prashnna Kumar Gyawali², Nilesh Kumar¹, Linwei Wang¹ — ¹Rochester Institute of Technology, ²Stanford University |
| Pseudocode | No | The paper does not contain explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/Zhiyuan1991/CUDOS_release. |
| Open Datasets | Yes | We evaluated CUDOS on (1) a split version of 3DShapes (Burgess & Kim, 2018)... (2) MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017), and their moving versions in (Achille et al., 2018), and (3) split-CelebA (Liu et al., 2015). |
| Dataset Splits | No | The paper states that it uses 'a split version of 3DShapes' and other datasets, but it does not provide specific train/validation/test split percentages, sample counts, or references to predefined splits for reproduction. |
| Hardware Specification | No | The paper does not explicitly mention the specific hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper provides hyperparameters but does not list specific software dependencies with version numbers (e.g., 'Python 3.8', 'PyTorch 1.9'). |
| Experiment Setup | Yes | We set γ1 = 0.25, γ2 = 1, γ3 = 0.35, b = 10 in all experiments. A snapshot of the model is updated every τ = 1500 iterations. |
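The reported hyperparameters can be collected into a minimal configuration sketch. All names below (`cudos_config`, `should_update_snapshot`) are hypothetical; the paper reports only the values, not a software stack or code structure.

```python
# Hypothetical configuration holding the hyperparameters reported in the paper.
# Variable names are illustrative, not taken from the authors' code.
cudos_config = {
    "gamma1": 0.25,             # loss weight γ1 (reported value)
    "gamma2": 1.0,              # loss weight γ2 (reported value)
    "gamma3": 0.35,             # loss weight γ3 (reported value)
    "b": 10,                    # reported hyperparameter b
    "snapshot_interval": 1500,  # τ: model snapshot update period, in iterations
}


def should_update_snapshot(step: int, config: dict) -> bool:
    """Return True on iterations where the model snapshot would be refreshed,
    i.e. every τ = snapshot_interval steps."""
    return step > 0 and step % config["snapshot_interval"] == 0
```

A training loop would call `should_update_snapshot(step, cudos_config)` once per iteration and copy the current model weights whenever it returns True.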