Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

PhantomWiki: On-Demand Datasets for Reasoning and Retrieval Evaluation

Authors: Albert Gong, Kamilė Stankevičiūtė, Chao Wan, Anmol Kabra, Raphael Thesmar, Johann Lee, Julius Klenke, Carla P Gomes, Kilian Q Weinberger

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our evaluation on PhantomWiki confirms that the benchmark presents significant challenges for all of the state-of-the-art LLMs we used. We evaluate the reasoning and retrieval capabilities of several frontier LLMs using PhantomWiki, decomposing their performance over questions of varying difficulty and universes of varying sizes.
Researcher Affiliation | Academia | 1Department of Computer Science, Cornell University, Ithaca, New York, USA; 2Department of Computer Science and Technology, University of Cambridge, Cambridge, UK. Correspondence to: Albert Gong <EMAIL>, Kamilė Stankevičiūtė <EMAIL>, Chao Wan <EMAIL>, Anmol Kabra <EMAIL>.
Pseudocode | No | The paper describes the PhantomWiki pipeline in Section 3 and illustrates it with Figure 2, outlining the steps for generating a universe, creating documents, generating questions, and deducing answers using a logic program. However, these steps are described in prose and a diagram rather than in a formally structured pseudocode or algorithm block.
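The four prose steps (generate a universe, write documents, generate questions, deduce answers) could be sketched roughly as follows. This is a toy illustration of the pipeline's shape only; every function name and data structure here is an assumption for the example, not the authors' actual implementation, and a trivial fact lookup stands in for the paper's logic program.

```python
import random

def generate_universe(size, seed=0):
    """Sample a toy universe: `size` people, each with one random relation fact."""
    rng = random.Random(seed)
    people = [f"person_{i}" for i in range(size)]
    facts = {(p, "friend", rng.choice(people)) for p in people}
    return people, facts

def write_documents(people, facts):
    """Render one short article per person from the facts mentioning them."""
    return {
        p: " ".join(f"{s} is a {r} of {o}." for (s, r, o) in sorted(facts) if s == p)
        for p in people
    }

def generate_questions(facts):
    """Turn each fact into a question-answer pair; answers are deduced directly
    from the fact base (standing in for the logic program in the paper)."""
    return [(f"Who is a {r} of {s}?", o) for (s, r, o) in sorted(facts)]

people, facts = generate_universe(5)
docs = write_documents(people, facts)
qa_pairs = generate_questions(facts)
```

Because documents and answers are both derived from the same fact base, question difficulty and universe size can be scaled independently, which is the property the evaluation above exploits.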
Open Source Code | Yes | The source code for this work is available at github.com/kilian-group/phantom-wiki and via pip install phantom-wiki; the sample Hugging Face datasets are available at kilian-group/phantom-wiki-v1.
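Given the distribution channels named in this row, obtaining the code and sample data might look like the following; the exact dataset configuration and split names are not stated here, so check the dataset card before relying on them.

```shell
# Install the released package from PyPI, per the stated channel.
pip install phantom-wiki

# The sample datasets live on the Hugging Face Hub under
# kilian-group/phantom-wiki-v1; in Python they would typically be
# loaded with datasets.load_dataset (config/split names are assumptions):
#   from datasets import load_dataset
#   ds = load_dataset("kilian-group/phantom-wiki-v1")
```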
Open Datasets | Yes | The sample Hugging Face datasets are available at kilian-group/phantom-wiki-v1.
Dataset Splits | Yes | We generate 10 new PhantomWiki dataset instances (question depth 20 and universe size 50), amounting to 5K training question-answer pairs. We then perform full fine-tuning... We evaluate these models on the three PhantomWiki dataset instances of size n = 50 and maximum recursion depth 20 (500 questions per dataset instance), the same used in Table 2 and Figure 3.
Hardware Specification | Yes | Generating questions with recursion depth d = 10 for size n = 100K (well beyond any existing LLM's context length) takes just 6 minutes on 8 Intel Cascade Lake CPU cores. We use a node of 4 A100 GPUs, each with 80GB of VRAM, for each experiment.
Software Dependencies | No | We use the GRPO implementation from Hugging Face's TRL library. Again, we use the SFT implementation from Hugging Face's TRL library. While the Hugging Face TRL library is mentioned, specific version numbers for it or for other key software dependencies (e.g., Python, PyTorch) are not provided.
Experiment Setup | Yes | We cap all LLM outputs at 4096 tokens and use greedy decoding (temperature = 0). For DeepSeek-R1-32B, we use temperature = 0.6 and top-p = 0.95, following DeepSeek-AI (2025, Section 3). We set the max prompt length to 4096, sufficient to include our prompt of the 50 articles of the universe, and limit the max completion length to 128. We fine-tune for 3 epochs using the AdamW optimizer with the initial learning rate set to 5 × 10^-6 for full fine-tuning and 10^-4 for LoRA fine-tuning.
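Gathered in one place, the generation and fine-tuning settings quoted in this row could be organized as a config like the following. The dict layout and key names are purely illustrative; only the numeric values come from the quoted text.

```python
# Evaluation-time decoding settings quoted above (layout is illustrative).
EVAL_CONFIG = {
    "max_output_tokens": 4096,
    "decoding": {
        "default": {"temperature": 0.0},                      # greedy decoding
        "deepseek-r1-32b": {"temperature": 0.6, "top_p": 0.95},
    },
}

# Fine-tuning settings quoted above (layout is illustrative).
FINETUNE_CONFIG = {
    "max_prompt_length": 4096,      # fits the 50-article universe prompt
    "max_completion_length": 128,
    "epochs": 3,
    "optimizer": "AdamW",
    "learning_rate": {"full": 5e-6, "lora": 1e-4},
}
```

Keeping the per-model decoding overrides separate from the shared defaults makes it explicit that only DeepSeek-R1-32B deviates from greedy decoding.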