Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Expressivity of ReLU-Networks under Convex Relaxations

Authors: Maximilian Baader, Mark Niklas Mueller, Yuhao Mao, Martin Vechev

ICLR 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Theoretical | "In this work, we prove the following key results:" |
| Researcher Affiliation | Academia | Department of Computer Science, ETH Zurich, Switzerland; EMAIL |
| Pseudocode | No | The paper contains mathematical derivations, lemmas, and theorems, but no explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements or links regarding the availability of open-source code for the described methodology. |
| Open Datasets | No | The paper is theoretical, focusing on mathematical proofs of expressivity. It does not conduct experiments using datasets, and thus provides no information about public dataset availability. |
| Dataset Splits | No | As a theoretical paper, it does not involve data splits for training, validation, or testing. |
| Hardware Specification | No | The paper is theoretical and does not describe any experiments that would require specific hardware to run. |
| Software Dependencies | No | The paper focuses on theoretical concepts and mathematical proofs and does not specify software dependencies with version numbers. |
| Experiment Setup | No | As a theoretical paper, it does not include details on experimental setup, hyperparameters, or training configurations. |