Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
On the Importance of Uncertainty in Decision-Making with Large Language Models
Authors: Nicolò Felicioni, Lucas Maystre, Sina Ghiassian, Kamil Ciosek
TMLR 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically show on real-world data that the greedy policy performs worse than the Thompson Sampling policies. These findings suggest that, while overlooked in the LLM literature, uncertainty improves performance on bandit tasks with LLM agents. |
| Researcher Affiliation | Collaboration | Nicolò Felicioni* (Politecnico di Milano); Lucas Maystre (Spotify); Sina Ghiassian (Spotify); Kamil Ciosek (Spotify) |
| Pseudocode | Yes | Algorithm 1 (Greedy) — Require: Bandit model f_θ. 1: Initialize D; 2: for time t = 1, …, T do … Algorithm 2 (Thompson Sampling) — Require: Bandit model f_θ; Require: Prior distribution on the parameters P(θ) = N(θ_p, Σ_p). 1: Initialize D; 2: for time t = 1, …, T do … |
| Open Source Code | No | The text is ambiguous or lacks a clear, affirmative statement of release. |
| Open Datasets | Yes | An open-source dataset, called Measuring Hate Speech, which is openly available on Hugging Face Datasets. ... An open-source dataset, called IMDb (Maas et al., 2011), which is openly available on Hugging Face Datasets. ... An open-source dataset, called Offensive Language Identification (Zampieri et al., 2019), which is openly available on Hugging Face Datasets. ... An open-source dataset, called HatEval (Basile et al., 2019), which is openly available on Hugging Face Datasets. |
| Dataset Splits | No | The paper uses a sequential decision-making framework (contextual bandits) where data is observed in batches over time steps (e.g., 'At each time step t = 1, 2, . . . , T, the agent observes a batch of contexts... For each time step, the agent will observe a batch of B = 32 comments.') rather than predefined training, validation, and test splits typically found in supervised learning tasks. |
| Hardware Specification | Yes | Our experiments were conducted on one NVIDIA A100 GPU with 80GBs of VRAM. |
| Software Dependencies | No | The paper mentions using a pre-trained GPT2 model, the Hugging Face library, and the Adam optimizer, but does not provide version numbers for these software components (e.g., 'We use the implementation provided by the Hugging Face library.' without a version number for the library itself). |
| Experiment Setup | Yes | Every model is trained with regularized MSE loss as in Eq. 3... We train each model at the end of each time step for 50 epochs with the Adam optimizer (Kingma & Ba, 2014), with learning rate set to 3 × 10⁻⁵... For each model, hyperparameters are tuned on 10 random runs... We did not tune the dropout probability because we wanted to exploit the fact that GPT2 was pre-trained with dropout p = 0.1... For this set of experiments, we train our models for 5 epochs for each batch of data. (Table 1, showing tuned hyperparameters such as regularization factor λ = 1, prior variance σ²_p = 0.0001, and observation variance σ²_obs = 0.01, is also provided.) |
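The Greedy and Thompson Sampling loops quoted in the Pseudocode row can be sketched in a toy form. This is not the paper's implementation: it substitutes a small per-arm Bayesian linear model for the paper's GPT2-based bandit model, and all dimensions, priors, and the reward function are illustrative assumptions. It only shows the structural difference between the two policies: Greedy acts on the posterior mean, while Thompson Sampling acts on a posterior sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy problem (not from the paper): K arms, d-dimensional contexts,
# linear rewards with Gaussian noise.
d, K, T = 5, 4, 200
theta_true = rng.normal(size=(K, d))

def reward(arm, x):
    return theta_true[arm] @ x + rng.normal(scale=0.1)

class BayesLinear:
    """Per-arm Bayesian linear regression with prior N(0, sigma_p^2 I)."""
    def __init__(self, d, sigma_p2=1.0, sigma_obs2=0.01):
        self.A = np.eye(d) / sigma_p2  # posterior precision
        self.b = np.zeros(d)
        self.sigma_obs2 = sigma_obs2

    def update(self, x, r):
        self.A += np.outer(x, x) / self.sigma_obs2
        self.b += r * x / self.sigma_obs2

    def mean_cov(self):
        cov = np.linalg.inv(self.A)
        return cov @ self.b, cov

def run(policy):
    models = [BayesLinear(d) for _ in range(K)]
    total = 0.0
    for _ in range(T):
        x = rng.normal(size=d)
        scores = []
        for m in models:
            mu, cov = m.mean_cov()
            # Greedy: use the posterior mean; Thompson Sampling: draw one
            # parameter sample from the posterior and act greedily on it.
            theta = mu if policy == "greedy" else rng.multivariate_normal(mu, cov)
            scores.append(theta @ x)
        a = int(np.argmax(scores))
        r = reward(a, x)
        models[a].update(x, r)
        total += r
    return total

print("greedy reward:  ", run("greedy"))
print("thompson reward:", run("thompson"))
```

In the paper's setting the batch size is B = 32 comments per time step and the model is retrained after each step; the loop above uses single observations purely to keep the policy comparison readable.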