Approximate Vanishing Ideal via Data Knotting

Authors: Hiroshi Kera, Yoshihiko Hasegawa

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In experimental classification tests, the method discovered far fewer and lower-degree polynomials than an existing state-of-the-art method; consequently, it reduced the runtime of the classification tasks without degrading classification accuracy.
Researcher Affiliation | Academia | Hiroshi Kera and Yoshihiko Hasegawa, Department of Information and Communication Engineering, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan.
Pseudocode | Yes | The paper includes 'Algorithm 1 Find Basis', 'Algorithm 2 Exact Vanish Pursuit', and 'Algorithm 3 Main'. (An illustrative sketch of the approximate-vanishing condition follows the table.)
Open Source Code | No | The paper mentions a 'Python implementation' but gives no explicit statement about releasing the source code and no link to a code repository.
Open Datasets | Yes | The datasets were downloaded from the UCI Machine Learning Repository (Lichman 2013).
Dataset Splits | Yes | Hyperparameters were determined by 3-fold cross validation, and results were averaged over ten independent runs; in each run, the data were randomly split into training (60%) and test (40%) sets. (This protocol is sketched in code after the table.)
Hardware Specification | No | The paper states 'Both methods were tested by Python implementation on a workstation with four processors and 8GB memory.' This gives some general detail but lacks the specific processor models and any GPU information needed for full reproducibility.
Software Dependencies | No | The paper mentions a 'Python implementation' but does not specify version numbers for Python or for any associated libraries used in the experiments.
Experiment Setup | No | The paper states that 'the hyperparameters were determined by 3-fold cross validation' but does not report which values were searched or selected (such as an error tolerance) or other detailed configuration settings. (A sketch of what a fully specified setup would include follows the table.)
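
The pseudocode itself is not reproduced here. As a minimal illustration of the notion the paper is built on, the Python sketch below checks whether a candidate polynomial approximately vanishes on a point set, i.e., whether the norm of its evaluation vector stays within a tolerance eps. The function name, data layout, and tolerance value are assumptions for illustration; this is not the paper's Find Basis or Exact Vanish Pursuit procedure.

```python
import numpy as np

def approximately_vanishes(coeffs, powers, X, eps=0.1):
    """Return True if the polynomial sum_j coeffs[j] * x**powers[j]
    approximately vanishes on the point set X, i.e., the norm of its
    evaluation vector is at most eps."""
    # Evaluate every monomial on every data point: shape (n_points, n_terms).
    monomials = np.prod(X[:, None, :] ** powers[None, :, :], axis=2)
    values = monomials @ coeffs  # f(x_i) for each point x_i
    return np.linalg.norm(values) <= eps

# Example: f(x, y) = x^2 + y^2 - 1 nearly vanishes on noisy unit-circle data.
rng = np.random.default_rng(0)
theta = np.linspace(0.0, 2.0 * np.pi, 50)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.001 * rng.standard_normal((50, 2))
coeffs = np.array([1.0, 1.0, -1.0])
powers = np.array([[2, 0], [0, 2], [0, 0]])
print(approximately_vanishes(coeffs, powers, X))  # True
```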
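
The reported split protocol is concrete enough to sketch. The snippet below mirrors it with scikit-learn: ten random 60/40 train/test splits, hyperparameters chosen by 3-fold cross validation on each training set, and accuracy averaged over the runs. The SVC classifier and its parameter grid are stand-ins, since the paper's own implementation is not released.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC  # stand-in classifier, not the paper's method

def run_protocol(X, y, param_grid, n_runs=10):
    """Ten random 60/40 train/test splits; hyperparameters picked by
    3-fold cross validation on the training set; mean test accuracy."""
    accuracies = []
    for seed in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=0.6, test_size=0.4, random_state=seed)
        search = GridSearchCV(SVC(), param_grid, cv=3)
        search.fit(X_tr, y_tr)
        accuracies.append(search.score(X_te, y_te))
    return float(np.mean(accuracies))

# Usage with a UCI-style dataset (Iris here, loaded via scikit-learn):
from sklearn.datasets import load_iris
X, y = load_iris(return_X_y=True)
print(run_protocol(X, y, {"C": [0.1, 1.0, 10.0]}))
```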
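
As an illustration of what a fully specified setup would add, the sketch below collects the protocol values the paper does state alongside a hypothetical search grid for a tolerance parameter; neither the parameter name eps nor the grid values come from the paper.

```python
import numpy as np

# Values stated in the paper:
protocol = {
    "cv_folds": 3,          # 3-fold cross validation for hyperparameters
    "n_runs": 10,           # results averaged over ten independent runs
    "train_fraction": 0.6,  # random 60% train / 40% test split per run
}

# Hypothetical search space (NOT reported in the paper): a log-spaced
# sweep over a vanishing-tolerance parameter, here called eps.
eps_grid = np.logspace(-3, 0, num=10).tolist()
```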