Compressing Latent Space via Least Volume
Authors: Qiuyi Chen, Mark Fuge
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the intuition behind the regularization on some pedagogical toy problems, and its effectiveness on several benchmark problems, including MNIST, CIFAR-10 and CelebA. |
| Researcher Affiliation | Academia | Qiuyi Chen & Mark Fuge Department of Mechanical Engineering University of Maryland, College Park {qchen88,fuge}@umd.edu |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | We make the code public on GitHub to ensure reproducibility: https://github.com/IDEALLab/Least_Volume_ICLR2024 |
| Open Datasets | Yes | the MNIST dataset (Deng, 2012) and the CIFAR-10 dataset (Krizhevsky et al., 2014)... CelebA dataset (Liu et al., 2015) |
| Dataset Splits | No | The paper mentions "three cross validations" but does not specify the validation splits (e.g., percentages or counts) or reference predefined splits that include a validation set. Table B.3 lists only "Training Set Size" and "Test Set Size". |
| Hardware Specification | Yes | All experiments are performed on NVIDIA A100 SXM GPU 80GB. |
| Software Dependencies | No | The paper mentions software components such as Torchvision, the Adam optimizer, and activation functions, but does not specify version numbers for these or other key dependencies (e.g., Python, PyTorch). |
| Experiment Setup | Yes | The hyperparameters are listed in Table B.2 (toy problems) and Table B.5 (image datasets), covering Batch Size, λ, η, Learning Rate, and Epochs. |