Cost-Sensitive Learning to Rank

Authors: Ryan McBride, Ke Wang, Zhouyang Ren, Wenyuan Li (pp. 4570-4577)

AAAI 2019

Each entry below gives a reproducibility variable, its assessed result, and the LLM response supporting that result.
Research Type: Experimental. We run experiments to validate the benefits of our new solutions on both proprietary and public data sets. In experiments, we validate two claims:
Researcher Affiliation: Academia. Ryan McBride and Ke Wang, Simon Fraser University, BC, Canada (rom2@sfu.ca, wangk@cs.sfu.ca); Zhouyang Ren and Wenyuan Li, Chongqing University, Chongqing, China (rzhouyang@gmail.com, wenyuan.li@ieee.org).
Pseudocode: No. The paper describes algorithms such as Cost-Sensitive MART, Cost-Sensitive Coordinate Ascent, and CS-AdaRank conceptually, explaining how they adapt existing methods. However, it does not provide any explicitly labeled pseudocode. (A hedged, generic sketch of a cost-sensitive top-k measure appears after this listing.)
Open Source Code: No. The paper mentions
Open Datasets: Yes. We consider two proprietary outage data sets and three public UCI data sets. Attributes and details on each data set are provided in Table 2.
Dataset Splits: Yes. Testing uses five-fold cross-validation via LETOR's separation of data, with three folds used for training, one for validation, and one for testing (Qin et al. 2010). (A fold-rotation sketch appears after this listing.)
Hardware Specification: No. The paper does not specify any hardware details (e.g., CPU, GPU models, or cloud instance types) used for running the experiments.
Software Dependencies: No. The paper mentions
Experiment Setup: Yes. For the two outage data sets, we use ks of 10 and 50, based on domain knowledge of how many networks may be strengthened before a storm given a 24-hour lead time. For the other data sets, we use two ks: a low k (12.5% of the average number of instances in a list) and a medium k (25% of the average list length). RankLib does not support missing values or categorical attributes, so we removed any attribute with missing values and converted each categorical attribute with C categories into C binary attributes. (A preprocessing sketch appears after this listing.)
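
Since the paper provides no explicitly labeled pseudocode, the following is a minimal, hypothetical sketch of one common way a top-k ranking measure can be made cost-sensitive: each instance carries a cost, and a ranking is scored by the share of total cost captured by the k highest-ranked items. The function name, data layout, and the measure itself are assumptions for illustration, not the paper's formulation.

    # Hypothetical cost-at-k measure: share of total cost captured by the
    # k highest-scored items. Illustrative only; not taken from the paper.
    from typing import Sequence

    def cost_captured_at_k(scores: Sequence[float],
                           costs: Sequence[float],
                           k: int) -> float:
        total = sum(costs)
        if total == 0:
            return 0.0
        # Rank items by predicted score, highest first.
        order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
        return sum(costs[i] for i in order[:k]) / total

    # Example: five candidate networks with predicted risk scores and outage costs.
    print(cost_captured_at_k([0.9, 0.1, 0.7, 0.3, 0.5], [100, 5, 80, 10, 40], k=2))
    # -> 180 / 235, the cost captured by the two highest-scored networks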
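
The five-fold protocol quoted under Dataset Splits (three folds training, one validation, one testing) can be sketched as the rotation below. The fold indexing is an assumption; LETOR actually distributes its folds as pre-built files (Qin et al. 2010).

    # Sketch of a LETOR-style five-fold rotation: for each fold f, three parts
    # train, one validates, one tests. The rotation order is an assumption.
    def letor_five_fold(parts):
        """parts: list of 5 disjoint query subsets; yields (train, vali, test)."""
        assert len(parts) == 5
        for f in range(5):
            test = parts[f]
            vali = parts[(f + 1) % 5]
            train = [q for i in range(5) if i not in (f, (f + 1) % 5) for q in parts[i]]
            yield train, vali, test

    # Example with 25 dummy query ids split into five parts of five.
    parts = [list(range(i * 5, (i + 1) * 5)) for i in range(5)]
    for train, vali, test in letor_five_fold(parts):
        print(len(train), len(vali), len(test))  # 15 5 5 on every rotation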
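
The preprocessing described under Experiment Setup (drop any attribute with missing values, expand each categorical attribute with C categories into C binary attributes) and the choice of the two k values for the public data sets can be sketched as below. The use of pandas and the helper names are assumptions, not taken from the paper or RankLib.

    # Preprocessing sketch matching the setup above; pandas usage and helper
    # names are assumptions.
    import pandas as pd

    def preprocess(df: pd.DataFrame) -> pd.DataFrame:
        # Remove any attribute (column) that contains missing values.
        df = df.dropna(axis=1)
        # Expand each categorical attribute with C categories into C binary attributes.
        categorical = df.select_dtypes(include=["object", "category"]).columns
        return pd.get_dummies(df, columns=list(categorical))

    def choose_ks(list_lengths):
        avg = sum(list_lengths) / len(list_lengths)
        return round(0.125 * avg), round(0.25 * avg)  # low k and medium k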