Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Dropout Reduces Underfitting
Authors: Zhuang Liu, Zhiqiu Xu, Joseph Jin, Zhiqiang Shen, Trevor Darrell
ICML 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on ImageNet and various vision tasks demonstrate that our methods consistently improve generalization accuracy. |
| Researcher Affiliation | Collaboration | ¹FAIR, Meta AI ²UC Berkeley ³MBZUAI. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/facebookresearch/dropout. |
| Open Datasets | Yes | We conduct empirical evaluations on ImageNet-1K classification with 1,000 classes and 1.2M training images (Deng et al., 2009) |
| Dataset Splits | Yes | We conduct empirical evaluations on ImageNet-1K classification with 1,000 classes and 1.2M training images (Deng et al., 2009) and report top-1 validation accuracy. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like "AdamW" and references external libraries/models such as "ConvNeXt" and "PyTorch image models", but it does not specify concrete version numbers for the overall software environment (e.g., Python, PyTorch, CUDA versions). |
| Experiment Setup | Yes | We provide our basic training recipe with specific details in Table 8. This recipe is based on the setting in ConvNeXt (Liu et al., 2022). For the improved recipe, we increase the number of epochs to 600, and reduce mixup and cutmix to 0.3. All other configurations remain unchanged. |
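
To make the Experiment Setup row above concrete, here is a minimal sketch of the quoted recipe change as a Python configuration. Only the values quoted from the paper are grounded (600 epochs; mixup and cutmix reduced to 0.3); the `TrainingRecipe` class, its field names, and the baseline defaults are hypothetical stand-ins for the ConvNeXt (Liu et al., 2022) setting the paper builds on.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class TrainingRecipe:
    """Hypothetical config container; field names are illustrative."""
    epochs: int = 300         # assumed ConvNeXt-style baseline
    mixup: float = 0.8        # assumed baseline augmentation strengths
    cutmix: float = 1.0
    optimizer: str = "adamw"  # the paper mentions AdamW

basic = TrainingRecipe()

# Improved recipe per the paper: epochs raised to 600, mixup and
# cutmix reduced to 0.3; all other configurations remain unchanged.
improved = replace(basic, epochs=600, mixup=0.3, cutmix=0.3)

print(improved)
```

Deriving `improved` via `dataclasses.replace` mirrors the paper's statement that all other configurations remain unchanged from the basic recipe.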