Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

LayerAct: Advanced Activation Mechanism for Robust Inference of CNNs

Authors: Kihyuk Yoon, Chiehyeon Lim

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on three clean and noisy benchmark datasets for image classification tasks indicate that LayerAct functions outperform other activation functions in handling noisy datasets while achieving superior performance on clean datasets in most cases. Experimental analysis with the MNIST dataset demonstrates that LayerAct functions have the following properties: (1) the mean activation of LayerAct functions is zero-like, and (2) the output fluctuation due to noisy input is smaller with these functions than with ElementAct functions. We compared the performance of the residual networks (ResNets; He et al. 2016) with LayerAct functions to those with other ElementAct functions on three image classification tasks.
Researcher Affiliation | Academia | Kihyuk Yoon, Chiehyeon Lim; UNIST, Ulsan, Republic of Korea; EMAIL, EMAIL
Pseudocode | No | The paper describes the proposed method using mathematical formulations and textual explanations but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code & Appendix: https://github.com/KihyukYoon/LayerAct
Open Datasets | Yes | Experimental analysis with the MNIST dataset demonstrates that LayerAct functions have the following properties: (1) the mean activation of LayerAct functions is zero-like, and (2) the output fluctuation due to noisy input is smaller with these functions than with ElementAct functions. We demonstrate the classification performance of LayerAct on CIFAR10, ImageNet (Russakovsky et al. 2015), and a medical image dataset (Goodman et al. 2018).
Dataset Splits | Yes | We trained a network with a single layer containing 512 elements on the MNIST training dataset without any noise to observe the behavior of LayerAct functions during training. We report the accuracy of 10-crop testing on the validation dataset. The networks were trained on CIFAR10.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependency details, such as library or framework names with version numbers.
Experiment Setup | No | For the detailed experimental setting to ensure reproducibility, see Appendix F. This indicates that specific experimental setup details are provided, but they appear in an appendix rather than in the main text of the paper.
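The two properties quoted in the table above (zero-like mean activation, and smaller output fluctuation under noisy input than element-wise activations) can be checked empirically with a small sketch. The paper's exact formulation is not reproduced here, so the snippet below assumes a hypothetical LayerAct-style function, `layeract_silu`, that scales each pre-activation by a sigmoid of its layer-normalized value; the function name and formula are illustrative assumptions, not the authors' definition.

```python
import numpy as np

def layeract_silu(x, eps=1e-5):
    # Hypothetical LayerAct-style activation (illustrative assumption):
    # the scaling factor is computed in the layer direction by
    # layer-normalizing the pre-activations, rather than element-wise.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    norm = (x - mu) / np.sqrt(var + eps)
    return x / (1.0 + np.exp(-norm))  # x * sigmoid(layer-normalized x)

def relu(x):
    # Element-wise (ElementAct) baseline for comparison.
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(256, 512))          # batch of 256 pre-activation vectors
noise = 0.1 * rng.normal(size=x.shape)   # small input perturbation

for name, f in [("LayerAct-style", layeract_silu), ("ReLU", relu)]:
    mean_act = f(x).mean()
    fluctuation = np.abs(f(x + noise) - f(x)).mean()
    print(f"{name}: mean activation = {mean_act:.4f}, "
          f"mean output change under noise = {fluctuation:.4f}")
```

On standard-normal inputs, the layer-direction variant's mean activation sits noticeably closer to zero than ReLU's; this is only a toy check of the quoted properties, not a reproduction of the paper's MNIST analysis.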