Identifiability Matters: Revealing the Hidden Recoverable Condition in Unbiased Learning to Rank

Authors: Mouxiang Chen, Chenghao Liu, Zemin Liu, Zhuo Li, Jianling Sun

ICML 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Section 5 (Experiments): "In this section, we describe our experimental setup and show the empirical results, in both the fully synthetic setting and large-scale study." |
| Researcher Affiliation | Collaboration | Mouxiang Chen¹, Chenghao Liu², Zemin Liu³, Zhuo Li⁴, Jianling Sun¹; ¹Zhejiang University, ²Salesforce Research Asia, ³National University of Singapore, ⁴State Street Technology (Zhejiang) Ltd. |
| Pseudocode | Yes | "Based on Theorem 1, we illustrate the identifiability check in Algorithm 1." (see the identifiability-check sketch after this table) |
| Open Source Code | Yes | "Code is available at https://github.com/Keytoyze/ULTR-identifiability" |
| Open Datasets | Yes | "The datasets can be downloaded from https://webscope.sandbox.yahoo.com/ (Yahoo!), http://quickrank.isti.cnr.it/istella-dataset/ (Istella-S) and http://www.thuir.cn/tiangong-st/ (TianGong-ST)." |
| Dataset Splits | Yes | "We followed the given data split of training, validation, and testing." |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using "LightGBM" and the "ULTRA framework" but does not provide specific version numbers for these or other key software components used in the experiments. |
| Experiment Setup | Yes | "The total number of trees was 500, the learning rate was 0.1, number of leaves for one tree was 255." (see the LightGBM configuration sketch after this table) |
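The paper's Algorithm 1 is not reproduced in this report. As a rough illustration of the kind of identifiability check described there, the sketch below assumes the condition reduces to connectivity of a graph over ranking positions, where two positions are linked whenever some logged query-document pair was displayed at both. The function name `check_identifiability`, the input format, and the union-find implementation are assumptions made for illustration, not the authors' code.

```python
from collections import defaultdict


def check_identifiability(observations):
    """Illustrative identifiability check via graph connectivity.

    `observations` is assumed to be an iterable of (context, position) pairs,
    where `context` identifies a query-document pair and `position` is the
    rank at which it was displayed. Positions shown the same query-document
    pair are linked; the check passes iff all observed positions end up in a
    single connected component.
    """
    parent = {}  # union-find over positions

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Collect the set of positions at which each query-document pair appeared.
    positions_by_context = defaultdict(set)
    for context, position in observations:
        positions_by_context[context].add(position)

    all_positions = set()
    for positions in positions_by_context.values():
        positions = sorted(positions)
        all_positions.update(positions)
        for p in positions[1:]:
            union(positions[0], p)

    roots = {find(p) for p in all_positions}
    return len(roots) <= 1


if __name__ == "__main__":
    # Toy click log: (query, doc) pairs displayed at various positions.
    logs = [(("q1", "d1"), 1), (("q1", "d1"), 2),
            (("q2", "d5"), 2), (("q2", "d5"), 3)]
    print(check_identifiability(logs))  # True: positions 1-3 are connected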
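The reported GBDT hyperparameters (500 trees, learning rate 0.1, 255 leaves per tree) map naturally onto LightGBM's ranking interface. Below is a minimal sketch assuming the scikit-learn style `LGBMRanker` API; the objective choice and the placeholder data are assumptions, not settings confirmed by the paper.

```python
import numpy as np
import lightgbm as lgb

# Hyperparameters reported in the paper: 500 trees, learning rate 0.1,
# 255 leaves per tree. The lambdarank objective is an assumption for a
# standard learning-to-rank setup.
ranker = lgb.LGBMRanker(
    n_estimators=500,
    learning_rate=0.1,
    num_leaves=255,
    objective="lambdarank",
)

# Placeholder data: 100 documents with 10 features, relevance labels 0-4,
# grouped into 10 queries of 10 documents each.
X = np.random.rand(100, 10)
y = np.random.randint(0, 5, size=100)
group = [10] * 10

ranker.fit(X, y, group=group)
scores = ranker.predict(X[:10])  # scores for the first query's documents
```

In practice, the placeholder arrays and group sizes would be replaced by the features, relevance labels, and per-query group sizes of the Yahoo!, Istella-S, or TianGong-ST datasets.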