BaKer-Nets: Bayesian Random Kernel Mapping Networks

Authors: Hui Xue, Zheng-Fan Wu

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Systematical experiments demonstrate the significance of BaKer-Nets in improving learning processes on the premise of preserving the structural superiority. In this section, we systematically evaluate the practical performance of BaKer-Nets compared with state-of-the-art related algorithms."
Researcher Affiliation | Academia | School of Computer Science and Engineering, Southeast University, Nanjing, 210096, China; Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing, 210096, China. {hxue, zfwu}@seu.edu.cn
Pseudocode | No | The paper includes mathematical formulations and derivations but no explicitly labeled "Pseudocode" or "Algorithm" block.
Open Source Code | No | The paper does not provide any statement or link about open-sourcing the code for the described methodology.
Open Datasets | Yes | "Firstly, we conduct a benchmark experiment on four classification datasets and four regression datasets, which are collected from UCI [Blake and Merz, 1998] and LIBSVM [Chang and Lin, 2011]. Secondly, we conduct an image classification experiment on MNIST, FMNIST and CIFAR10 [LeCun et al., 1998; Xiao et al., 2017; Krizhevsky et al., 2009]."
Dataset Splits | No | The paper states that data is "randomly divided into non-overlapping training and test sets" for the benchmark datasets and that "the division of image datasets is consistent with their default settings", but it does not explicitly mention a validation set or provide details for one.
Hardware Specification | No | The paper describes general experimental settings (e.g., "the scales of all deep architectures are set to 1000-500-50") but does not specify any particular hardware (e.g., GPU or CPU models, memory) used for the experiments.
Software Dependencies | No | The paper mentions that algorithms are "optimized by Adam [Kingma and Ba, 2014]" and cites "Automatic differentiation in PyTorch" [Paszke et al., 2017], but it does not provide version numbers for these or any other software dependencies.
Experiment Setup | Yes | "The scales of all deep architectures are set to 1000-500-50. Sigmoid activation is applied to deep neural networks. Moreover, all algorithms are initialized according to the Xavier method [Glorot and Bengio, 2010], and are optimized by Adam [Kingma and Ba, 2014]. The learning rate is initially set to a commonly-used default value 0.001 [Paszke et al., 2017], which is automatically tuned by the optimizer. Epochs are set to be large enough to ensure the convergence for all algorithms."
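The reported setup can be sketched as a minimal NumPy forward pass. Only the 1000-500-50 hidden scales, the sigmoid activations, and the Xavier (Glorot) initialization come from the paper; the input dimension (64), output dimension (10), batch size, and random seed below are placeholders chosen for illustration, and the actual experiments are PyTorch models trained with Adam at learning rate 0.001:

```python
import numpy as np

def xavier_init(fan_in, fan_out, rng):
    # Xavier/Glorot uniform initialization: U(-a, a) with a = sqrt(6 / (fan_in + fan_out))
    a = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-a, a, size=(fan_in, fan_out))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hidden scales 1000-500-50 as stated in the paper; the 64-d input and
# 10-class output are assumptions, since they vary per dataset.
rng = np.random.default_rng(0)
dims = [64, 1000, 500, 50, 10]
weights = [xavier_init(d_in, d_out, rng) for d_in, d_out in zip(dims, dims[1:])]
biases = [np.zeros(d) for d in dims[1:]]

def forward(x):
    # Sigmoid-activated hidden layers, linear output layer.
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = sigmoid(h @ W + b)
    return h @ weights[-1] + biases[-1]

logits = forward(rng.standard_normal((8, 64)))
print(logits.shape)  # (8, 10)
```

In PyTorch, the same configuration would correspond to `nn.init.xavier_uniform_` on each linear layer and `torch.optim.Adam(model.parameters(), lr=0.001)`, matching the defaults the paper cites.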