Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

PowerMLP: An Efficient Version of KAN

Authors: Ruichen Qiu, Yibo Miao, Shiwen Wang, Yifan Zhu, Lijia Yu, Xiao-Shan Gao

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our comprehensive experiments demonstrate that PowerMLP generally achieves higher accuracy and a training speed about 40 times faster than KAN in various tasks. (...) In this section, we employ several experiments to validate these theoretical findings and demonstrate the advantages of PowerMLP. Four experiments are conducted.
Researcher Affiliation | Academia | 1 School of Advanced Interdisciplinary Sciences, UCAS, Beijing 100049, China; 2 Academy of Mathematics and Systems Science, CAS, Beijing 100190, China; 3 University of Chinese Academy of Sciences, Beijing 101408, China; 4 Institute of Software, CAS, Beijing 100190, China; 5 State Key Laboratory of Computer Science; EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes mathematical formulations and network structures but does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/Iri-sated/PowerMLP.
Open Datasets | Yes | PowerMLP is tested on a regression task for 16 special functions in KAN's experiments (Liu et al. 2024). (...) we conduct two experiments on Titanic and Income (Becker and Kohavi 1996), which are classification tasks of small input dimension. (...) We conduct two experiments on SMS Spam Collection (Spam) (Gómez Hidalgo et al. 2006) and AG NEWS (Zhang, Zhao, and LeCun 2015) dataset (...) For image classification tasks, we conduct two experiments on MNIST (LeCun et al. 1998) and SVHN (Netzer et al. 2011) datasets.
Dataset Splits | No | The paper mentions using 'test accuracy' and 'test RMSE loss' and conducting experiments on various datasets, implying dataset splits were used. However, it does not explicitly provide specific percentages, sample counts, or methodologies for these splits (e.g., an '80/10/10 split' or a 'standard train/test split from [citation]').
Hardware Specification | Yes | For better comparison, the experiments are on a single NVIDIA GeForce RTX 4090 GPU, repeated each task 10 times to take an average, and networks in each task are trained with the same hyperparameters.
Software Dependencies | No | The paper mentions 'PyTorch (Molchanov et al. 2017)' in the context of calculating FLOPs, but it does not specify the version of PyTorch or any other software libraries used for the implementation of PowerMLP or its experiments.
Experiment Setup | No | The paper states that 'networks in each task are trained with the same hyperparameters' but does not explicitly list the specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) used for the experiments. It details network shapes and parameter counts, which are architectural properties, not training hyperparameters.
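The Dataset Splits row above flags that the paper never states how its data were partitioned. As an illustration of the kind of explicit, reproducible split specification the classifier looks for, here is a minimal sketch; the 80/10/10 ratios and the seed are hypothetical examples, not values taken from the paper:

```python
import random

def split_indices(n, train=0.8, val=0.1, seed=0):
    """Deterministically split n sample indices into train/val/test subsets.

    Illustrative only: the ratios and seed are hypothetical, chosen to show
    what an explicitly documented split looks like.
    """
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # fixed seed makes the split reproducible
    n_train = int(n * train)
    n_val = int(n * val)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = split_indices(1000)
print(len(train_idx), len(val_idx), len(test_idx))  # 800 100 100
```

Reporting the ratios, seed, and resulting sample counts in this form is what would flip the Dataset Splits variable from No to Yes.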