Collaboratively Learning Preferences from Ordinal Data

Authors: Sewoong Oh, Kiran K. Thekumparampil, Jiaming Xu

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present the convex relaxation approach in two contexts of interest: collaborative ranking and bundled choice modeling. In both cases, we show that the convex relaxation is minimax optimal. We prove an upper bound on the resulting error with finite samples, and provide a matching information-theoretic lower bound. ... The left panel of Figure 1 confirms the scaling of the error rate as predicted by Corollary 3.1. The lines merge to a single line when the sample size is rescaled appropriately. ... The root mean squared error (RMSE) is plotted, where $\mathrm{RMSE} = (1/d)\|\Theta - \widehat{\Theta}\|_F$. We implement and solve the convex optimization (3) using the proximal gradient descent method as analyzed in [24].
Researcher Affiliation | Academia | Sewoong Oh, Kiran K. Thekumparampil, University of Illinois at Urbana-Champaign ({swoh,thekump2}@illinois.edu); Jiaming Xu, The Wharton School, UPenn (jiamingx@wharton.upenn.edu)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statements about releasing open-source code or links to a code repository for the described methodology.
Open Datasets | No | The paper describes a theoretical sampling model for generating data ('$S_i = \{j_{i,1}, \ldots, j_{i,k}\}$ where the $j_{i,\ell}$'s are independently drawn uniformly at random over the $d_2$ items') but does not specify the use of any publicly available or open datasets with concrete access information.
Dataset Splits | No | The paper describes theoretical guarantees and numerical experiments on synthetically generated data but does not explicitly provide training, validation, or test dataset splits.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for the numerical experiments (e.g., GPU/CPU models or memory specifications).
Software Dependencies | No | The paper mentions implementing and solving the convex optimization using the 'proximal gradient descent method as analyzed in [24]' but does not list any specific software dependencies with version numbers (e.g., programming languages, libraries, or solvers).
Experiment Setup | Yes | We make a choice of $\lambda = (1/2)\sqrt{(\log d)/(k d^2)}$. This choice is independent of $\alpha$ and is smaller than the one proposed in Theorem 1. We generate random rank-$r$ matrices of dimension $d \times d$, where $\Theta = UV^T$ with the entries of $U \in \mathbb{R}^{d \times r}$ and $V \in \mathbb{R}^{d \times r}$ generated i.i.d. from the uniform distribution over $[0, 1]$. Then the row mean is subtracted from each row, and the whole matrix is scaled such that the largest entry is $\alpha = 5$. Note that this operation does not increase the rank of the matrix $\Theta$. ... The root mean squared error (RMSE) is plotted, where $\mathrm{RMSE} = (1/d)\|\Theta - \widehat{\Theta}\|_F$. We implement and solve the convex optimization (3) using the proximal gradient descent method as analyzed in [24]. (Hedged sketches of this setup and of the proximal gradient iteration follow the table.)
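
To make the quoted setup concrete, here is a minimal sketch, assuming Python with NumPy. This is not the authors' code; the dimension d, rank r, and RNG seed are illustrative assumptions, and only the steps quoted in the Experiment Setup row are implemented.

```python
# Hypothetical sketch of the synthetic ground-truth generation quoted above;
# d, r, and the RNG seed are illustrative choices, not the paper's settings.
import numpy as np

def make_ground_truth(d=100, r=5, alpha=5.0, seed=0):
    """Random rank-r d x d matrix Theta = U V^T with i.i.d. Uniform[0,1]
    entries, row means subtracted, then scaled so the largest entry is alpha."""
    rng = np.random.default_rng(seed)
    U = rng.uniform(0.0, 1.0, size=(d, r))
    V = rng.uniform(0.0, 1.0, size=(d, r))
    theta = U @ V.T                              # rank at most r
    theta -= theta.mean(axis=1, keepdims=True)   # subtract each row's mean
    theta *= alpha / theta.max()                 # largest entry becomes alpha
    return theta

def rmse(theta, theta_hat):
    """RMSE = (1/d) * ||Theta - Theta_hat||_F, the metric plotted in Figure 1."""
    return np.linalg.norm(theta - theta_hat, "fro") / theta.shape[0]
```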
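The paper solves the convex program (3) with the proximal gradient method of [24] but, as noted in the Pseudocode row, gives no algorithm block. Below is a hedged sketch of a generic proximal gradient loop for an objective of the form minimize L(Theta) + lambda * ||Theta||_* (a smooth loss plus a nuclear-norm penalty), whose proximal step is singular value soft-thresholding. The gradient grad_L is a placeholder for the negative log-likelihood of the ordinal observations, and the step size and iteration count are illustrative, not the paper's settings.

```python
# Hedged sketch of proximal gradient descent for
#   minimize_Theta  L(Theta) + lam * ||Theta||_* ,
# where the prox of the nuclear norm is singular value soft-thresholding.
# grad_L, step, and n_iters are placeholders for the paper's actual loss
# (the negative log-likelihood of the ordinal data) and tuning.
import numpy as np

def svt(M, tau):
    """Prox of tau * ||.||_*: soft-threshold the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def proximal_gradient(grad_L, theta0, lam, step=1.0, n_iters=500):
    """Iterate Theta <- prox_{step*lam*||.||_*}(Theta - step * grad_L(Theta))."""
    theta = theta0.copy()
    for _ in range(n_iters):
        theta = svt(theta - step * grad_L(theta), step * lam)
    return theta

def lam_choice(d, k):
    """The quoted regularization lam = (1/2) * sqrt(log(d) / (k * d^2)),
    where k is the number of items revealed per user in the sampling model."""
    return 0.5 * np.sqrt(np.log(d) / (k * d**2))
```

Under these assumptions, a call such as proximal_gradient(grad_L, np.zeros((d, d)), lam_choice(d, k)) would mirror the quoted pipeline once grad_L is implemented for the sampled ordinal observations.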