Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Making Transformer Decoders Better Differentiable Indexers
Authors: Wuchao Li, Kai Zheng, Defu Lian, Qi Liu, Wentian Bao, Yun Yu, Yang Song, Han Li, Kun Gai
ICLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on three real-world datasets, showing that URI achieves state-of-the-art performance as a generative retrieval method. Moreover, we design various experiments to explain its superior performance from multiple perspectives. ... 5 EXPERIMENT We conduct extensive experiments to answer the following questions: RQ1 What is the overall performance of URI compared to other Generative methods? ... 5.2 OVERALL PERFORMANCE Table 1: Recall(R)@K and NDCG(N)@K Results of URI compared with baseline methods... |
| Researcher Affiliation | Collaboration | 1 University of Science and Technology of China, 2 Kuaishou, 3 Independent |
| Pseudocode | No | The paper describes algorithms like the 'Greedy Algorithm' and EM algorithm steps in prose (Section 4.1) and through equations, but does not present them in structured pseudocode or algorithm blocks with explicit labels like 'Algorithm' or 'Pseudocode'. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating the public release of its source code. |
| Open Datasets | Yes | Datasets We select three widely-used real-world datasets... The KuaiSAR (Sun et al., 2023) dataset... The Beauty and Toys and Games datasets are both from the Amazon platform (He & McAuley, 2016). |
| Dataset Splits | No | The paper uses widely-used real-world datasets and specifies parameters for the index (e.g., 'For KuaiSAR dataset, we set the index width k = 64 and depth L = 2'), but it does not explicitly state the training, validation, and test splits (e.g., percentages or sample counts) for these datasets within the document. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU models) used for running the experiments. |
| Software Dependencies | No | For text extraction, we utilize the bert-base-cased (Kenton & Toutanova, 2019) model from Huggingface (Jain, 2022) across all methods. While specific tools are mentioned, version numbers for the Huggingface library or other key software components are not provided, preventing full reproducibility. |
| Experiment Setup | Yes | We set the representation dimension to 96, the number of layers in the Decoder model to 2, the beam size to 20, and the learning rate to 0.001. For NCI and DSI, we set the number of tokens in the final layer to 100, referred to as c in the original papers. For KuaiSAR dataset, we set the index width k = 64 and depth L = 2. For Beauty and Toys datasets, we set k = 32 and L = 2. |
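The hyperparameters quoted in the Experiment Setup row can be collected into a single configuration sketch, which may help when attempting a reproduction. This is a minimal sketch: the key names and the dictionary layout are illustrative assumptions, not taken from the paper, which does not publish a config file.

```python
# Hypothetical configuration sketch assembling the hyperparameters quoted
# in the Experiment Setup row. Key names are illustrative, not the authors'.
COMMON = {
    "representation_dim": 96,   # representation dimension
    "decoder_layers": 2,        # number of layers in the Decoder model
    "beam_size": 20,
    "learning_rate": 1e-3,
    # Number of tokens in the final layer for the NCI/DSI baselines,
    # referred to as 'c' in those papers.
    "nci_dsi_final_layer_tokens": 100,
}

# Per-dataset index shape: width k and depth L, as reported in the paper.
INDEX = {
    "KuaiSAR": {"k": 64, "L": 2},
    "Beauty": {"k": 32, "L": 2},
    "Toys and Games": {"k": 32, "L": 2},
}
```

All three datasets share index depth L = 2; only the width k differs between KuaiSAR and the two Amazon datasets.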