State Representation Learning Using an Unbalanced Atlas
Authors: Li Meng, Morten Goodwin, Anis Yazidi, Paal E. Engelstad
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The efficacy of DIM-UA is demonstrated through training and evaluation on the Atari Annotated RAM Interface (AtariARI) benchmark, a modified version of the Atari 2600 framework that produces annotated image samples for representation learning. The UA paradigm improves existing algorithms significantly as the number of target encoding dimensions grows. For instance, the mean F1 score of DIM-UA, averaged over categories, is 75%, compared to 70% for ST-DIM when using 16384 hidden units (a sketch of this category-averaged F1 computation follows the table). |
| Researcher Affiliation | Academia | Li Meng, University of Oslo, Oslo, Norway (li.meng@its.uio.no); Morten Goodwin, University of Agder, Kristiansand, Norway (morten.goodwin@uia.no); Anis Yazidi, Oslo Metropolitan University, Oslo, Norway (anisy@oslomet.no); Paal Engelstad, University of Oslo, Oslo, Norway (paal.engelstad@its.uio.no) |
| Pseudocode | Yes | PyTorch-style pseudocode of the DIM-UA algorithm is provided in Algorithm 1 (an illustrative multi-head sketch of the UA idea follows the table). |
| Open Source Code | Yes | Code is available at https://github.com/mengli11235/DIM-UA. |
| Open Datasets | Yes | The efficacy of DIM-UA is demonstrated through training and evaluation on the Atari Annotated RAM Interface (AtariARI) benchmark, a modified version of the Atari 2600 framework that produces annotated image samples for representation learning. We modify SimCLR using the UA paradigm (SimCLR-UA) and perform additional experiments on CIFAR10, following the parameter settings and evaluation protocol from Korman (2021b); Chen et al. (2020) (a CIFAR10 loading sketch follows the table). |
| Dataset Splits | No | The paper specifies pretraining and probe training steps but does not explicitly state the dataset splits for training, validation, or testing (e.g., percentages or specific sample counts for each split). |
| Hardware Specification | Yes | The experiments are conducted on a single Nvidia GeForce RTX 2080 Ti and 8-core CPU, using PyTorch-1.7 (Paszke et al., 2019). |
| Software Dependencies | Yes | The experiments are conducted on a single Nvidia GeForce RTX 2080 Ti and 8-core CPU, using PyTorch-1.7 (Paszke et al., 2019) (an environment-check snippet follows the table). |
| Experiment Setup | Yes | Table 3 (hyper-parameter values on AtariARI): image size 160 × 210; minibatch size 64; learning rate 3e-4; epochs 100; pretraining steps 80,000; probe training steps 35,000; probe testing steps 10,000. In addition, τ in Eq. 9 is set to 0.1 for DIM-UA. These values are collected into a config sketch after the table. |
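
To make the headline metric above concrete, here is a minimal sketch of a category-averaged F1 computation, as referenced in the Research Type row. The function name, the dict-based data layout, and the weighted per-category averaging are assumptions for illustration; the paper itself only reports the mean F1 score averaged over categories.

```python
import numpy as np
from sklearn.metrics import f1_score

def mean_category_f1(preds_by_category, labels_by_category):
    """Average F1 across AtariARI state-variable categories.

    Both arguments map a category name to arrays of probe predictions and
    ground-truth labels. The weighted per-category F1 is an assumption;
    the exact probing protocol follows the AtariARI benchmark.
    """
    scores = [
        f1_score(labels_by_category[c], preds_by_category[c], average="weighted")
        for c in labels_by_category
    ]
    return float(np.mean(scores))
```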
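The paper's Algorithm 1 contains the authoritative PyTorch-style pseudocode for DIM-UA. The module below is only a hedged sketch of the general unbalanced-atlas idea, several output heads ("charts") combined through a temperature-scaled softmax membership; the class name, the number of charts, and the way membership is derived from the shared features are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UnbalancedAtlasHead(nn.Module):
    """Sketch: several chart heads plus a soft, temperature-scaled
    membership over charts (tau = 0.1 per the paper's Eq. 9 setting)."""

    def __init__(self, feat_dim, hidden_units, n_charts=4, tau=0.1):
        super().__init__()
        # One linear projection per chart of the atlas (count is assumed).
        self.charts = nn.ModuleList(
            nn.Linear(feat_dim, hidden_units) for _ in range(n_charts)
        )
        # Logits deciding how strongly each chart represents a sample.
        self.membership = nn.Linear(feat_dim, n_charts)
        self.tau = tau

    def forward(self, feats):
        outs = torch.stack([c(feats) for c in self.charts], dim=1)  # (B, C, H)
        w = F.softmax(self.membership(feats) / self.tau, dim=-1)    # (B, C)
        return (w.unsqueeze(-1) * outs).sum(dim=1)                  # (B, H)
```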
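For the SimCLR-UA experiments on CIFAR10, a torchvision pipeline along the following lines could be used. The augmentation parameters follow the common SimCLR recipe of Chen et al. (2020) and are representative rather than verbatim; the paper defers to the settings of Korman (2021b) and Chen et al. (2020).

```python
import torchvision
import torchvision.transforms as T

# Representative SimCLR-style augmentations for 32x32 CIFAR10 images.
simclr_transform = T.Compose([
    T.RandomResizedCrop(32),
    T.RandomHorizontalFlip(),
    T.RandomApply([T.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    T.RandomGrayscale(p=0.2),
    T.ToTensor(),
])

class TwoViews:
    """Produce two independently augmented views of each image, as
    contrastive methods like SimCLR require."""
    def __init__(self, transform):
        self.transform = transform

    def __call__(self, x):
        return self.transform(x), self.transform(x)

train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True,
    transform=TwoViews(simclr_transform),
)
```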
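To compare a local environment against the reported setup, a quick check is:

```python
import torch

# Reported setup: PyTorch 1.7, one Nvidia GeForce RTX 2080 Ti, 8-core CPU.
print("PyTorch:", torch.__version__)
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```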
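Finally, the Table 3 values can be collected into one configuration object; the dict name and key spellings are illustrative rather than taken from the released code.

```python
# Hyper-parameters from Table 3 (AtariARI); names are illustrative.
ATARI_ARI_CONFIG = {
    "image_size": (160, 210),
    "minibatch_size": 64,
    "learning_rate": 3e-4,
    "epochs": 100,
    "pretraining_steps": 80_000,
    "probe_training_steps": 35_000,
    "probe_testing_steps": 10_000,
    "tau": 0.1,  # temperature in Eq. 9, used by DIM-UA
}
```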