Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Efficient Distribution Matching of Representations via Noise-Injected Deep InfoMax
Authors: Ivan Butakov, Alexander Semenenko, Alexander Tolmachev, Andrey Gladkov, Marina Munkhoeva, Alexey Frolov
ICLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate our distribution matching approach on several datasets and downstream tasks. To assess the quality of the embeddings, we solve downstream classification tasks and calculate clustering scores. To explore the relation between the magnitude of injected noise and the quality of DM, a set of statistical normality tests is employed. For the experiments requiring numerous evaluations or visualization, we use the MNIST handwritten digits dataset (LeCun et al., 2010). For other experiments, we use the CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009) and ImageNet (Russakovsky et al., 2015). |
| Researcher Affiliation | Collaboration | 1Skolkovo Institute of Science and Technology; 2Moscow Institute of Physics and Technology; 3Artificial Intelligence Research Institute; EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes methods and proofs but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code is available in a GitHub repository. |
| Open Datasets | Yes | For the experiments requiring numerous evaluations or visualization, we use the MNIST handwritten digits dataset (LeCun et al., 2010). For other experiments, we use the CIFAR-10 and CIFAR-100 datasets (Krizhevsky, 2009) and ImageNet (Russakovsky et al., 2015). |
| Dataset Splits | No | The paper mentions using standard benchmark datasets and following a 'standardized protocol for augmentation and training' for linear probing, but it does not explicitly provide train/test/validation split percentages, sample counts, or citations to predefined splits. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper mentions optimizers like 'Adam optimizer (Kingma & Ba, 2017)' and 'LARS optimizer (You et al., 2017)' but does not specify software libraries (e.g., PyTorch, TensorFlow) with version numbers. It also refers to methods and architectures like ResNet-18, SimCLR, and VICReg, but these are not software dependencies with specific versions for the experimental environment. |
| Experiment Setup | Yes | Training hyperparameters are as follows: batch size 1024, 2000 epochs, Adam optimizer (Kingma & Ba, 2017) with learning rate 10⁻³. [...] Training hyperparameters are as follows: batch size 256, 800 epochs, LARS optimizer (You et al., 2017) with clipping, base learning rate 0.3, momentum 0.9, trust coefficient 0.02, weight decay 10⁻⁴. For SimCLR, we use temperature 0.2; for VICReg, standard hyperparameters (25, 25, 1). |
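The hyperparameters quoted in the "Experiment Setup" row can be collected into a configuration sketch for reference. This is an illustration only: the dictionary names and structure below are assumptions, not taken from the authors' repository, and the two configurations correspond to the two quoted training setups (the Adam-based one and the LARS-based one).

```python
# Hypothetical configuration sketch assembling the hyperparameters quoted
# in the "Experiment Setup" row; names and layout are illustrative only.

ADAM_CONFIG = {
    "batch_size": 1024,
    "epochs": 2000,
    "optimizer": "Adam",          # Kingma & Ba (2017)
    "learning_rate": 1e-3,
}

LARS_CONFIG = {
    "batch_size": 256,
    "epochs": 800,
    "optimizer": "LARS",          # You et al. (2017), with gradient clipping
    "base_learning_rate": 0.3,
    "momentum": 0.9,
    "trust_coefficient": 0.02,
    "weight_decay": 1e-4,
    "simclr_temperature": 0.2,            # SimCLR-specific setting
    "vicreg_coefficients": (25, 25, 1),   # VICReg "standard" loss weights
}
```

Note that neither excerpt states how the base learning rate is scaled with batch size or which library implements LARS, so those details would have to be checked against the released code.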