Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Skip Connections Eliminate Singularities
Authors: Emin Orhan, Xaq Pitkow
ICLR 2018 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | These hypotheses are supported by evidence from simplified models, as well as from experiments with deep networks trained on real-world datasets. |
| Researcher Affiliation | Academia | A. Emin Orhan, Xaq Pitkow, Baylor College of Medicine & Rice University |
| Pseudocode | No | The paper does not contain any sections, figures, or blocks explicitly labeled as 'Pseudocode' or 'Algorithm'. |
| Open Source Code | No | The paper does not include any explicit statements about releasing source code or provide a link to a code repository. |
| Open Datasets | Yes | The networks were trained on the CIFAR-100 dataset (with coarse labels) using the Adam optimizer (Kingma & Ba, 2014) with learning rate 0.0005 and a batch size of 500. |
| Dataset Splits | No | The paper states 'We used the standard splits of the data into training and test sets', but does not explicitly describe a separate validation split, its size, or how it was used. |
| Hardware Specification | No | The paper does not specify any particular hardware components such as GPU models, CPU types, or cloud computing instance details used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Adam optimizer (Kingma & Ba, 2014)' but does not specify any version numbers for Adam or any other software libraries, frameworks, or programming languages used. |
| Experiment Setup | Yes | The networks were trained on the CIFAR-100 dataset (with coarse labels) using the Adam optimizer (Kingma & Ba, 2014) with learning rate 0.0005 and a batch size of 500. |
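
The paper does not release code, so the following is a minimal, hypothetical PyTorch sketch of the reported training configuration only: CIFAR-100 data, the Adam optimizer with learning rate 0.0005, and a batch size of 500. The network itself (`SkipNet`, its width, depth, and layer types) is an illustrative stand-in for "a network with skip connections", not the authors' architecture, and torchvision's `CIFAR100` exposes the 100 fine labels rather than the 20 coarse labels used in the paper.

```python
# Hypothetical sketch of the reported setup: CIFAR-100, Adam (lr=0.0005), batch size 500.
# The architecture and all layer sizes below are assumptions, not the authors' model.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms


class SkipBlock(nn.Module):
    """Fully connected layer with an identity skip connection: y = x + f(x)."""

    def __init__(self, width: int):
        super().__init__()
        self.fc = nn.Linear(width, width)
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.act(self.fc(x))


class SkipNet(nn.Module):
    """Small stand-in network built from skip-connection blocks (illustrative only)."""

    def __init__(self, in_dim=3 * 32 * 32, width=128, depth=4, num_classes=100):
        super().__init__()
        self.proj = nn.Linear(in_dim, width)
        self.blocks = nn.Sequential(*[SkipBlock(width) for _ in range(depth)])
        self.head = nn.Linear(width, num_classes)

    def forward(self, x):
        x = x.flatten(start_dim=1)
        return self.head(self.blocks(self.proj(x)))


def train_one_epoch(model, loader, optimizer, device):
    criterion = nn.CrossEntropyLoss()
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()


if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    train_set = datasets.CIFAR100(
        root="./data", train=True, download=True, transform=transforms.ToTensor()
    )
    loader = DataLoader(train_set, batch_size=500, shuffle=True)  # batch size from the paper
    model = SkipNet().to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.0005)   # learning rate from the paper
    train_one_epoch(model, loader, optimizer, device)
```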