Shuffled Deep Regression

Authors: Masahiro Kohjima

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The effectiveness of the proposal is confirmed by benchmark data experiments." "Experiments conducted on benchmark datasets confirm the effectiveness of the proposals." "We confirm the effectiveness of the proposed method by numerical experiments on benchmark data."
Researcher Affiliation | Industry | "Masahiro Kohjima, NTT Human Informatics Laboratories, NTT Corporation, 1-1 Hikarinooka, Yokosuka, Kanagawa 239-0847, Japan, masahiro.kohjima.ev@hco.ntt.co.jp"
Pseudocode | Yes | "Algorithm 1: SSEM algorithm for Shuffled Deep Regression" (a generic E/M sketch follows the table)
Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described, such as a repository link or an explicit statement about code release.
Open Datasets | Yes | "As the regression problem instances we used four publicly available data sets provided in the UCI Machine Learning Repository (https://archive.ics.uci.edu/ml/index.php): auto-MPG data (MPG), abalone data (Abalone), Boston housing data (Boston), and concrete compressive strength data (Concrete)." (a fetch sketch follows the table)
Dataset Splits | Yes | "We prepared 5 data sets by randomly dividing the data and using 60% for training, 20% for validation, and 20% for testing." (a split sketch follows the table)
Hardware Specification | Yes | "Experiments were run on a computer with an Intel Xeon CPU and a GeForce GTX TITAN GPU."
Software Dependencies | No | The paper mentions the software used, such as "PyTorch (Paszke et al. 2019)" and "Adam (Kingma and Ba 2014)", but does not provide version numbers for these dependencies, which reproducibility requires. (a version-logging sketch follows the table)
Experiment Setup | Yes | "Both DR and SDR used a one-hidden-layer feedforward neural network with the ReLU activation function. The number of units was set to 20 for all problems. The parameters were optimized using Adam (Kingma and Ba 2014) with a learning rate of 0.001. The mini-batch sizes of SDR and DR were 32/K and 32, respectively. The maximum number of epochs was 2000, and we used early stopping with validation data (which were also shuffled data) (Goodfellow, Bengio, and Courville 2016)." (a configuration sketch follows the table)
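
The paper's Algorithm 1 (SSEM) is not reproduced here. As rough orientation, shuffled regression is often fit by alternating a soft correspondence estimate with a model update; the PyTorch sketch below shows one such generic E/M alternation. The Gaussian responsibility model, the fixed variance sigma2, and the row-wise softmax normalization are illustrative assumptions, not the paper's exact SSEM steps.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """One-hidden-layer regressor matching the paper's experimental setup."""
    def __init__(self, d_in, n_hidden=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, n_hidden), nn.ReLU(), nn.Linear(n_hidden, 1)
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def em_step(model, optimizer, x, y, sigma2=1.0):
    """One illustrative E/M alternation on a shuffled mini-batch.

    E-step: soft correspondences between each input x_i and each target y_j,
    proportional to a Gaussian likelihood under the current model (the fixed
    noise variance sigma2 is an assumption, not the paper's update).
    M-step: one gradient step on the responsibility-weighted squared error.
    """
    with torch.no_grad():
        sq = (y.unsqueeze(0) - model(x).unsqueeze(1)) ** 2  # sq[i, j] = (y_j - f(x_i))^2
        resp = torch.softmax(-sq / (2.0 * sigma2), dim=1)   # row-normalized responsibilities
    pred = model(x).unsqueeze(1)
    loss = (resp * (y.unsqueeze(0) - pred) ** 2).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A training loop would call em_step once per shuffled mini-batch; per the experiment setup row, the SDR mini-batch size would be 32/K.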
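For the Open Datasets row, one way to fetch these data programmatically is sketched below, assuming the third-party ucimlrepo package (not used in the paper). Note that the Boston housing data may no longer be listed in the current UCI repository and might need to be sourced separately.

```python
# Minimal fetch sketch; assumes `pip install ucimlrepo` (not mentioned in the paper).
from ucimlrepo import fetch_ucirepo

abalone = fetch_ucirepo(name="Abalone")  # name assumed to match the repository label
X = abalone.data.features  # pandas DataFrame of input attributes
y = abalone.data.targets   # pandas DataFrame holding the regression target
```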
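The quoted split protocol maps to a few lines of NumPy. The sketch below assumes independent random permutations per repetition, since the paper does not specify seed handling.

```python
import numpy as np

def make_splits(n, n_repeats=5, seed=0):
    """Randomly divide n indices into 60% train / 20% validation / 20% test,
    repeated n_repeats times, mirroring the quoted protocol. Seed handling
    is an assumption; the paper does not specify it."""
    rng = np.random.default_rng(seed)
    splits = []
    for _ in range(n_repeats):
        perm = rng.permutation(n)
        n_tr, n_va = int(0.6 * n), int(0.2 * n)
        splits.append((perm[:n_tr], perm[n_tr:n_tr + n_va], perm[n_tr + n_va:]))
    return splits
```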
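Because versions are unspecified, anyone re-running the experiments may want to record their own stack; a minimal sketch:

```python
import sys
import torch

# Log the software stack next to any results, since the paper omits versions.
print("python :", sys.version.split()[0])
print("pytorch:", torch.__version__)
print("cuda   :", torch.version.cuda)  # None on CPU-only builds
```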
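Finally, the quoted experiment setup maps directly onto a PyTorch configuration. In the sketch below, the input dimension and the group count K are placeholders, as both are dataset-dependent and not part of the quoted text.

```python
import torch
import torch.nn as nn

d_in, K = 8, 4  # placeholders: input dimension and group count are dataset-dependent

model = nn.Sequential(          # one hidden layer, 20 units, ReLU, as stated
    nn.Linear(d_in, 20),
    nn.ReLU(),
    nn.Linear(20, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

batch_size_sdr = 32 // K        # paper: SDR mini-batch size is 32/K
batch_size_dr = 32              # plain deep regression (DR) baseline
max_epochs = 2000               # with early stopping on (shuffled) validation data
```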