Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Deep Network Approximation: Achieving Arbitrary Accuracy with Fixed Number of Neurons

Authors: Shijun Zhang, Zuowei Shen, Haizhao Yang

JMLR 2022 | Venue PDF | LLM Run Details

Each entry below lists a reproducibility variable, the classified result, and the supporting LLM response (quoted from the paper where applicable).
Research Type: Experimental
  "Finally, we use numerical experimentation to show that replacing the rectified linear unit (ReLU) activation function by ours would improve the experiment results." Keywords: universal approximation property, fixed-size neural network, classification function, periodic function, nonlinear approximation.
Researcher Affiliation: Academia
  Zuowei Shen, Department of Mathematics, National University of Singapore; Haizhao Yang, Department of Mathematics, University of Maryland, College Park; Shijun Zhang, Department of Mathematics, National University of Singapore.
Pseudocode: No
  The paper describes its methods and mathematical proofs in prose and figures, but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code: No
  The paper does not provide an explicit statement about releasing source code or a link to a code repository for the described methodology.
Open Datasets: Yes
  "We will design convolutional neural network (CNN) architectures activated by ReLU or EUAF to solve a classification problem corresponding to a standard benchmark data set, Fashion-MNIST (Xiao et al., 2017). This data set consists of a training set of 60000 samples and a test set of 10000 samples."
Dataset Splits: Yes
  "We randomly choose 10^6 training samples and 10^5 test samples in [0, 1]^2." "This data set consists of a training set of 60000 samples and a test set of 10000 samples."
Hardware Specification: No
  The paper discusses concepts related to hardware balance and memory requirements but does not specify the hardware used to run its experiments.
Software Dependencies: No
  "To enable the automatic differentiation feature for EUAF, we need to implement EUAF based on PyTorch built-in functions. With the following four built-in functions abs(x) = |x|, floor(x) = ⌊x⌋, softsign(x) = x/(|x| + 1), and sign(x) = ... we can represent EUAF as..." "We adopt RAdam (Liu et al., 2020) as the optimization method..."
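As an illustrative sketch only (not the paper's released code, which does not exist per the entry above), the EUAF construction quoted here can be written from those four primitives. The version below is plain Python, with comments naming the PyTorch equivalents, and assumes the piecewise form reported in the paper: a periodic triangle wave for x >= 0 and softsign for x < 0.

```python
import math

def euaf(x: float) -> float:
    """Sketch of the elementary universal activation function (EUAF).

    Assumed piecewise form: a period-2 triangle wave on [0, inf) and
    softsign on (-inf, 0), built from the four primitives the paper
    cites: abs, floor, softsign, sign (torch.abs, torch.floor,
    torch.nn.functional.softsign, torch.sign in PyTorch).
    """
    if x >= 0:
        # Triangle wave with period 2: |x - 2 * floor((x + 1) / 2)|,
        # i.e. the distance from x to the nearest even integer.
        return abs(x - 2 * math.floor((x + 1) / 2))
    # softsign(x) = x / (|x| + 1)
    return x / (abs(x) + 1)
```

A tensorized PyTorch version would gate the two branches with `(torch.sign(x) + 1) / 2` instead of the `if`, so the whole expression stays differentiable almost everywhere under autograd.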
Experiment Setup: Yes
  "The number of epochs and the batch size are set to 500 and 256, respectively. We adopt RAdam (Liu et al., 2020) as the optimization method and the learning rate is 0.002 × 0.9^(i−1) in epochs 5(i−1)+1 to 5i for i = 1, 2, ..., 100. Several loss functions are used to estimate the training and test losses, including the mean squared error (MSE), the mean absolute error (MAE), and the maximum (MAX) loss functions. The MSE loss is used in our training process." "The number of epochs and the batch size are set to 500 and 128, respectively. We adopt RAdam (Liu et al., 2020) as the optimization method. The weight decay of the optimizer is 0.0001 and the learning rate is 0.002 × 0.9^(i−1) in epochs 5(i−1)+1 to 5i for i = 1, 2, ..., 100."
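The stepped learning-rate schedule quoted above (rate 0.002 × 0.9^(i−1) during epochs 5(i−1)+1 through 5i, for i = 1, ..., 100) can be sketched as a small helper. The function name `lr_for_epoch` is hypothetical, not from the paper:

```python
def lr_for_epoch(epoch: int, base_lr: float = 0.002, decay: float = 0.9) -> float:
    """Learning rate for a 1-indexed epoch under the quoted schedule.

    The rate is base_lr * decay**(i - 1) during epochs 5*(i - 1) + 1
    through 5*i, i.e. it decays by a factor of 0.9 every 5 epochs,
    covering epochs 1..500 with i = 1..100.
    """
    i = (epoch - 1) // 5 + 1  # index of the current 5-epoch block
    return base_lr * decay ** (i - 1)
```

In PyTorch this corresponds to a `torch.optim.lr_scheduler.StepLR` with `step_size=5` and `gamma=0.9` applied to the RAdam optimizer.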