Gradient Boosts the Approximate Vanishing Ideal

Authors: Hiroshi Kera, Yoshihiko Hasegawa

AAAI 2020, pp. 4428-4435 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental Results | We compare four basis construction algorithms: VCA, SBC with the coefficient normalization (SBC-nc), SBC with the gradient normalization (SBC-ng), and SBC-nc with the basis reduction. All experiments were performed using Julia implementations on a desktop machine with an eight-core processor and 32 GB memory.
Researcher Affiliation | Academia | Hiroshi Kera and Yoshihiko Hasegawa, Department of Information and Communication Engineering, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
Pseudocode | No | The paper describes algorithms in a step-by-step prose format with mathematical equations, but it does not include formally structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not state that its source code is released, and it provides no link to a code repository for the described methodology.
Open Datasets | Yes | We used three small standard datasets (Iris, Vowel, and Vehicle) from the UCI dataset repository (Lichman 2013).
Dataset Splits | Yes | Parameter ϵ was selected by 3-fold cross-validation. Because Iris and Vehicle do not have prespecified training and test sets, we randomly split each dataset into a training set (60%) and a test set (40%), which were mean-centralized and normalized so that the mean norm of data points is equal to one. (See the preprocessing sketch below the table.)
Hardware Specification | Yes | All experiments were performed using Julia implementations on a desktop machine with an eight-core processor and 32 GB memory.
Software Dependencies | No | The paper mentions Julia implementations and LIBLINEAR (Fan et al. 2008) but gives no version numbers for Julia or any other key library; LIBLINEAR is identified only by its publication reference.
Experiment Setup | Yes | The parameter ϵ is selected so that (i) the number of linear vanishing polynomials in the basis set agrees with the number of additional variables y_i, and (ii) except for these linear polynomials, the lowest degree (say, d_min) of the polynomials agrees with that of the Gröbner basis of the target variety, and the number of degree-d_min polynomials in the basis set agrees with or exceeds that of the Gröbner basis. Refer to the supplementary material for details. [...] Parameter ϵ was selected by 3-fold cross-validation. Because Iris and Vehicle do not have prespecified training and test sets, we randomly split each dataset into a training set (60%) and a test set (40%), which were mean-centralized and normalized so that the mean norm of data points is equal to one. [...] We trained ℓ2-regularized logistic regression with a one-versus-the-rest strategy using LIBLINEAR (Fan et al. 2008). (See the model-selection sketch below the table.)
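
To make the preprocessing recipe in the Dataset Splits row concrete: the paper's own implementation is in Julia, but as a rough illustration only, the following Python/NumPy sketch applies the described 60%/40% split, mean-centralization, and rescaling so that the mean norm of the data points equals one. Computing the centering and scaling statistics on the training set alone (and reusing them for the test set) is our assumption; the paper does not say how these statistics were computed.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def preprocess(X, y, seed=0):
    # 60% training / 40% test split, as described in the paper.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, train_size=0.6, random_state=seed)
    # Mean-centralize; statistics taken from the training set (our assumption).
    mu = X_tr.mean(axis=0)
    X_tr, X_te = X_tr - mu, X_te - mu
    # Rescale so the mean norm of the training points equals one.
    scale = np.linalg.norm(X_tr, axis=1).mean()
    return X_tr / scale, X_te / scale, y_tr, y_te
```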
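
Similarly, the ϵ selection by 3-fold cross-validation and the ℓ2-regularized one-versus-the-rest logistic regression from the Experiment Setup row can be sketched as below. This is not the authors' code: scikit-learn's liblinear solver (a wrapper around LIBLINEAR) stands in for a direct LIBLINEAR binding, and feature_map is a hypothetical placeholder for the paper's basis construction and polynomial evaluation step, which the paper does not release.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import StratifiedKFold

def select_epsilon(X_tr, y_tr, eps_grid, feature_map):
    # feature_map(X_fit, X_eval, eps) is a hypothetical stand-in for the
    # paper's method: build the vanishing basis on X_fit, evaluate on X_eval.
    best_eps, best_acc = None, -np.inf
    cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
    for eps in eps_grid:
        scores = []
        for fit_idx, val_idx in cv.split(X_tr, y_tr):
            F_fit = feature_map(X_tr[fit_idx], X_tr[fit_idx], eps)
            F_val = feature_map(X_tr[fit_idx], X_tr[val_idx], eps)
            # l2-regularized logistic regression, one-vs-rest, via the
            # LIBLINEAR-backed solver in scikit-learn.
            clf = OneVsRestClassifier(
                LogisticRegression(penalty="l2", solver="liblinear"))
            clf.fit(F_fit, y_tr[fit_idx])
            scores.append(clf.score(F_val, y_tr[val_idx]))
        if np.mean(scores) > best_acc:
            best_eps, best_acc = eps, float(np.mean(scores))
    return best_eps
```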