Adversarial Surrogate Losses for Ordinal Regression

Authors: Rizal Fathony, Mohammad Ali Bashiri, Brian Ziebart

NeurIPS 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We conduct our experiments on a benchmark dataset for ordinal regression [14], evaluate the performance using mean absolute error (MAE), and perform statistical tests on the results of different hinge loss surrogate methods. |
| Researcher Affiliation | Academia | Rizal Fathony, Mohammad Bashiri, Brian D. Ziebart, Department of Computer Science, University of Illinois at Chicago, Chicago, IL 60607, {rfatho2, mbashi4, bziebart}@uic.edu |
| Pseudocode | No | The paper describes optimization approaches (e.g., "stochastic optimization", "quadratic program") but does not provide any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link regarding the availability of its source code. |
| Open Datasets | Yes | We conduct our experiments on a benchmark dataset for ordinal regression [14], ... The benchmark contains datasets taken from the UCI Machine Learning repository [39]... |
| Dataset Splits | Yes | In the experiment, we first make 20 random splits of each dataset into training and testing sets. We performed two stages of five-fold cross validation on the first split training set for tuning each model's regularization constant λ. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU or CPU models, memory, cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, library, or solver versions) required for replication. |
| Experiment Setup | Yes | In the first stage, the possible values for λ are 2^i, i = {1, 3, 5, 7, 9, 11, 13}. Using the best λ in the first stage, we set the possible values for λ in the second stage as 2^(i/2) λ0, i = {−3, −2, −1, 0, 1, 2, 3}, where λ0 is the best parameter obtained in the first stage. ... we set C equal to 2^i C0, i = {−2, −1, 0, 1, 2} and γ equal to 2^i γ0, i = {−2, −1, 0, 1, 2}, where C0 and γ0 are the best parameters obtained in the first stage. |
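The Research Type row cites mean absolute error (MAE) as the evaluation metric. A minimal sketch of MAE for ordinal predictions is shown below; it is an illustration, not code from the paper, and it assumes labels are encoded as consecutive integers 1..k.

```python
import numpy as np

def mean_absolute_error(y_true, y_pred):
    """MAE between true and predicted ordinal labels.

    Assumes labels are integers 1..k, so the value is the average
    number of ordinal levels by which a prediction is off.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs(y_true - y_pred))

# Example: two of five predictions are off by one level -> MAE = 0.4
print(mean_absolute_error([1, 2, 3, 4, 5], [1, 3, 3, 3, 5]))
```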
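The Dataset Splits row describes 20 random train/test splits, with two stages of five-fold cross-validation on the first training split for tuning the regularization constant λ. A hedged sketch of that protocol, using scikit-learn's KFold and placeholder fit/predict functions (the actual model and the λ grid appear in the Experiment Setup row), could look like this:

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validated_mae(X, y, fit_fn, predict_fn, lam, n_folds=5, seed=0):
    """Mean MAE of a model with regularization constant lam over n_folds folds."""
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    errors = []
    for train_idx, val_idx in kf.split(X):
        model = fit_fn(X[train_idx], y[train_idx], lam)
        pred = predict_fn(model, X[val_idx])
        errors.append(np.mean(np.abs(pred - y[val_idx])))
    return float(np.mean(errors))

def tune_lambda(X_train, y_train, fit_fn, predict_fn, lam_grid):
    """Pick the lambda with the lowest cross-validated MAE."""
    scores = {lam: cross_validated_mae(X_train, y_train, fit_fn, predict_fn, lam)
              for lam in lam_grid}
    return min(scores, key=scores.get)

# The paper uses 20 random train/test splits and tunes only on the first one,
# e.g. with sklearn.model_selection.train_test_split; the per-dataset split
# proportions come from the benchmark [14] and are not restated here.
```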
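The two-stage grid in the Experiment Setup row (a coarse grid of λ = 2^i, then a finer grid of 2^(i/2)·λ0 around the first-stage winner, plus analogous 2^i·C0 and 2^i·γ0 grids for the kernel baselines) can be written out explicitly. This is one reading of the quoted text, not code released with the paper:

```python
# Stage 1: coarse grid for the regularization constant lambda.
stage1_lambdas = [2.0 ** i for i in (1, 3, 5, 7, 9, 11, 13)]

def stage2_lambdas(best_lambda):
    """Finer grid of 2^(i/2) * lambda0 around the stage-1 winner."""
    return [2.0 ** (i / 2) * best_lambda for i in range(-3, 4)]

def stage2_svm_grid(best_C, best_gamma):
    """Refined (C, gamma) grid of 2^i * C0 and 2^i * gamma0 around the stage-1 winners."""
    Cs = [2.0 ** i * best_C for i in range(-2, 3)]
    gammas = [2.0 ** i * best_gamma for i in range(-2, 3)]
    return [(C, g) for C in Cs for g in gammas]
```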