Uncoupled Regression from Pairwise Comparison Data
Authors: Liyuan Xu, Junya Honda, Gang Niu, Masashi Sugiyama
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Moreover, we empirically show that for linear models the proposed methods are comparable to ordinary supervised regression with labeled data." and "In this section, we present the empirical performances of the proposed methods in experiments based on synthetic data and benchmark data." |
| Researcher Affiliation | Academia | Liyuan Xu¹,² (liyuan@ms.k.u-tokyo.ac.jp), Junya Honda¹,² (honda@stat.t.u-tokyo.ac.jp), Gang Niu² (gang.niu@riken.jp), Masashi Sugiyama²,¹ (sugi@k.u-tokyo.ac.jp); ¹The University of Tokyo, ²RIKEN |
| Pseudocode | No | The paper describes the methods textually and mathematically but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing code or links to a code repository. |
| Open Datasets | Yes | "Result for benchmark datasets. We conducted the experiments for the benchmark datasets as well, in which we do not know true marginal $P_Y$. The details of benchmark datasets can be found in Appendix A. We use the original features as unlabeled data $D_U$." ... [8] D. Dua and C. Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml. |
| Dataset Splits | Yes | "The performance is also evaluated by the mean squared error (MSE) in the held-out test data." and "Density function $f_Y$ is estimated from target values in the dataset by kernel density estimation [25] with the Gaussian kernel. Here, the bandwidth of the Gaussian kernel is determined by cross-validation." (A minimal KDE sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions using SVMRank [13] but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | In all experiments, we consider the $\ell_2$-loss $\ell(z, t) = (z - t)^2$, which corresponds to setting $\phi(x) = x^2$ in the Bregman divergence $d_\phi(t, z)$. ... We employ the hypothesis space of linear functions $\mathcal{H} = \{h(x) = \theta^\top x \mid \theta \in \mathbb{R}^d\}$ for the RA method. A slightly different hypothesis space $\mathcal{H}' = \{h(x) = F_Y^{-1}(\sigma(\theta^\top x)) \mid \theta \in \mathbb{R}^d\}$ is employed for the TT method in order to simplify the loss, where $\sigma$ is the logistic function $\sigma(x) = 1/(1 + \exp(-x))$. The procedure of hyper-parameter tuning in RRA and RTT can be found in Appendix A. (A sketch of these hypothesis spaces follows the table.) |
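
As a reading aid for the dataset-splits row, here is a minimal sketch (not the authors' code; the targets below are placeholders) of estimating the density $f_Y$ with a Gaussian-kernel KDE whose bandwidth is chosen by cross-validation, using scikit-learn:

```python
# Minimal sketch, assuming scikit-learn's KernelDensity; not the authors' code.
# Estimates f_Y from target values with a Gaussian kernel, choosing the
# bandwidth by cross-validated log-likelihood, as the paper describes.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity

y = np.random.randn(500, 1)  # placeholder targets; real ones come from the dataset

grid = GridSearchCV(
    KernelDensity(kernel="gaussian"),
    {"bandwidth": np.logspace(-2, 1, 20)},  # candidate bandwidths
    cv=5,                                   # 5-fold cross-validation
)
grid.fit(y)

f_Y = grid.best_estimator_     # fitted KDE for f_Y
log_fY = f_Y.score_samples(y)  # log f_Y(y) at the sample points
```

`GridSearchCV` scores each candidate bandwidth by held-out log-likelihood (the default score of `KernelDensity`), which is the standard cross-validation criterion for KDE bandwidth selection.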
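And a minimal sketch of the two hypothesis spaces quoted in the experiment-setup row. The paper defines $\mathcal{H}'$ via $F_Y^{-1}$, the inverse CDF of $Y$; this sketch approximates $F_Y^{-1}$ by the empirical quantile function of the observed targets, which is an assumption made here for illustration rather than the paper's construction:

```python
# Minimal sketch of the RA and TT hypothesis spaces; illustrative only.
import numpy as np

def squared_loss(z, t):
    """l2-loss l(z, t) = (z - t)^2, i.e. phi(x) = x^2 in the Bregman divergence."""
    return (z - t) ** 2

def sigmoid(x):
    """Logistic function sigma(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

def h_linear(theta, X):
    """RA hypothesis: h(x) = theta^T x."""
    return X @ theta

def make_h_tt(theta, y_train):
    """TT hypothesis: h(x) = F_Y^{-1}(sigma(theta^T x)), with F_Y^{-1}
    approximated by the empirical quantile function of y_train (assumption)."""
    def h(X):
        u = sigmoid(X @ theta)          # squash scores into (0, 1)
        return np.quantile(y_train, u)  # empirical inverse CDF
    return h

# Toy usage with random data (illustrative names throughout):
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
theta = rng.normal(size=3)
y = rng.normal(size=100)
print(squared_loss(h_linear(theta, X), y).mean())       # mean l2-loss, RA hypothesis
print(squared_loss(make_h_tt(theta, y)(X), y).mean())   # mean l2-loss, TT hypothesis
```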