Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Efficient Statistical Assessment of Neural Network Corruption Robustness
Authors: Karim Tit, Teddy Furon, Mathias Rousset
NeurIPS 2021 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments tackling large-scale networks outline the efficiency of our method making a low number of calls to the network function. This section presents experimental results on ACAS Xu, MNIST, and ImageNet datasets with some trained classification networks listed in App. F.3 together with implementation details. |
| Researcher Affiliation | Collaboration | Karim Tit: Thales Land and Air Systems, BU IAS, Rennes, France; Univ. Rennes, Inria, CNRS, IRISA, Rennes, France. Teddy Furon: Univ. Rennes, Inria, CNRS, IRISA, Rennes, France. Mathias Rousset: Univ. Rennes, Inria, CNRS, IRMAR, Rennes, France. |
| Pseudocode | Yes | Alg. 1 gives the pseudo-code of our procedure. Algorithm 1: Robustness assessment with Last Particle simulation. Algorithm 2: Sampling one particle, Gen(L, 1). |
| Open Source Code | No | The paper states 'Yet, we provide a code processing several inputs x0 in parallel' in the conclusion, but it does not provide a specific link or explicit statement about the public release of the source code for the methodology described in the paper. |
| Open Datasets | Yes | This section presents experimental results on ACAS Xu, MNIST, and ImageNet datasets. MNIST [LeCun et al., 1990]. ImageNet dataset [Deng et al., 2009]. |
| Dataset Splits | No | The paper mentions using standard datasets like ACAS Xu, MNIST, and ImageNet, and refers to '100 test images from ImageNet dataset', but it does not provide explicit details about the training, validation, or test data splits (e.g., percentages, sample counts, or specific split methodologies) for reproducibility. |
| Hardware Specification | Yes | Experiments were run on a laptop PC (CPU = Intel(R) Core(TM) i7-9750H, GPU = GeForce RTX 2070), except for experiments on ImageNet, which were run on an Nvidia V100 GPU. |
| Software Dependencies | No | The paper mentions using the 'Gurobi optimizer' in the context of the ERAN benchmark, but it does not specify version numbers for Gurobi or any other key software components, libraries, or programming languages used in their own experiments. |
| Experiment Setup | Yes | We run our algorithm with N = 2, p_c = 10^-35 and t = 40. |
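For context on the method named in the Pseudocode row: the Last Particle algorithm is a multilevel-splitting Monte Carlo scheme for estimating very small probabilities of the form P(score(X) ≥ L). The sketch below is a generic illustration of that technique for a standard-normal input, not a reproduction of the paper's Algorithm 1; the score function, Metropolis step size, and safety cap are illustrative assumptions. It repeatedly kills the lowest-scoring of N particles and resamples it above the current minimum level via t Metropolis-Hastings steps, yielding the estimate (1 − 1/N)^m after m kills.

```python
import numpy as np

def last_particle_estimate(score, dim, level, n_particles=100, t_mh=20, rng=None):
    """Estimate p = P(score(X) >= level) for X ~ N(0, I_dim) with the
    last-particle variant of multilevel splitting (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    X = rng.standard_normal((n_particles, dim))
    S = np.array([score(x) for x in X])
    m = 0  # number of killed (lowest-score) particles so far
    while S.min() < level:
        i = int(np.argmin(S))
        L_cur = S[i]
        # Restart the killed particle from a random survivor, then run
        # t_mh Metropolis steps targeting N(0, I) restricted to {score > L_cur}.
        j = int(rng.integers(n_particles))
        while j == i:
            j = int(rng.integers(n_particles))
        x, s_x = X[j].copy(), S[j]
        for _ in range(t_mh):
            y = x + 0.5 * rng.standard_normal(dim)
            s_y = score(y)
            # Accept with the standard-normal Metropolis ratio, but only
            # inside the survival region above the current level.
            if s_y > L_cur and np.log(rng.random()) < 0.5 * (x @ x - y @ y):
                x, s_x = y, s_y
        X[i], S[i] = x, s_x
        m += 1
        if m > 100_000:  # safety cap for this sketch
            break
    # Each kill multiplies the survival estimate by (1 - 1/N).
    return (1.0 - 1.0 / n_particles) ** m
```

For example, `last_particle_estimate(lambda x: float(x[0]), 1, 2.0)` approximates the Gaussian tail P(X > 2) ≈ 2.3e-2; the paper's setting (N = 2, t = 40, and a certification threshold p_c = 10^-35) pushes the same mechanism far deeper into the tail than naive sampling could reach.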