Learning from Stochastically Revealed Preference

Authors: John Birge, Xiaocheng Li, Chunlin Sun

NeurIPS 2022

Reproducibility assessment: for each variable, the result and the LLM response supporting it.
Research Type: Experimental. LLM response: We illustrate the algorithm performance through numerical experiments.
Researcher Affiliation: Academia. LLM response: John R. Birge, The University of Chicago Booth School of Business (John.Birge@chicagobooth.edu); Xiaocheng Li, Imperial College Business School, Imperial College London (xiaocheng.li@imperial.ac.uk); Chunlin Sun, Institute for Computational and Mathematical Engineering, Stanford University (chunlin@stanford.edu).
Pseudocode: Yes. LLM response: Algorithm 1 (Posterior Sampling for the Gaussian Setting) and Algorithm 2 (Simulated annealing algorithm for δ-corruption); hedged sketches of both appear below.
Open Source Code: No. LLM response: The paper does not provide any explicit statements or links indicating the availability of open-source code for the described methodology.
Open Datasets: No. LLM response: The paper uses synthetically generated data drawn from specified distributions (e.g., 'a ~ Unif([1, 2]^n) and b ~ Unif([1, n])') and does not refer to any publicly available dataset with concrete access information; a data-generation sketch appears below.
Dataset Splits: No. LLM response: The paper does not specify explicit training/validation/test dataset splits (e.g., percentages or counts) or refer to standard predefined splits for its numerical experiments.
Hardware Specification: No. LLM response: The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications, or cloud instance types) used for running its experiments.
Software Dependencies: No. LLM response: The paper does not provide specific version numbers for any software components, libraries, or solvers used in the experiments.
Experiment Setup: Yes. LLM response: Algorithms 1 and 2 both take a number of iterations K. Algorithm 2 additionally specifies a margin γ, an initial temperature η > 0, a reduction rate c ∈ (0, 1), and an interval length τ. For the numerical experiments, it is stated that 'we run both Algorithm 1 and Algorithm 2 for K = 1000 iterations'; an annealing skeleton using these parameter names appears below.
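To make the Pseudocode row concrete, here is a minimal Python sketch of a generic Gaussian posterior-sampling loop in the spirit of Algorithm 1. It is not the paper's algorithm: the linear-Gaussian observation model, the `observe` callback, the prior, and the noise variance are all assumptions for illustration.

```python
import numpy as np

def posterior_sampling_gaussian(observe, n, K=1000, noise_var=1.0, seed=0):
    """Thompson-style posterior sampling under an ASSUMED linear-Gaussian
    model y = x @ theta_true + N(0, noise_var).  `observe(theta_k)` is a
    hypothetical callback: it acts on the sampled parameter and returns
    the resulting feature/response pair (x, y)."""
    rng = np.random.default_rng(seed)
    prec = np.eye(n)            # posterior precision; prior Sigma_0 = I (assumed)
    prec_mean = np.zeros(n)     # precision-weighted mean; prior mu_0 = 0 (assumed)
    draws = []
    for _ in range(K):
        cov = np.linalg.inv(prec)
        theta_k = rng.multivariate_normal(cov @ prec_mean, cov)  # posterior draw
        x, y = observe(theta_k)               # act on the draw, observe feedback
        prec += np.outer(x, x) / noise_var    # conjugate Gaussian update
        prec_mean += y * x / noise_var
        draws.append(theta_k)
    return np.array(draws)
```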
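Similarly, a hedged skeleton of simulated annealing using the hyperparameter names reported for Algorithm 2 (K, η, c, τ). The Gaussian proposal, the geometric cooling step every τ iterations, and the placeholder `loss` are assumptions; the paper's δ-corruption objective, where the margin γ enters, is not reproduced here.

```python
import math
import numpy as np

def simulated_annealing(loss, theta0, K=1000, eta=1.0, c=0.9, tau=50,
                        step=0.1, seed=0):
    """Generic simulated-annealing skeleton; `loss` and the proposal
    distribution are placeholders, not the paper's objective."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    f = loss(theta)
    temp = eta                                  # initial temperature eta > 0
    best_theta, best_f = theta.copy(), f
    for k in range(1, K + 1):
        proposal = theta + step * rng.standard_normal(theta.shape)
        f_new = loss(proposal)
        # Metropolis rule: always accept improvements, accept worse moves
        # with probability exp(-(f_new - f) / temp).
        if f_new <= f or rng.random() < math.exp(-(f_new - f) / temp):
            theta, f = proposal, f_new
            if f < best_f:
                best_theta, best_f = theta.copy(), f
        if k % tau == 0:                        # assumed cooling schedule:
            temp *= c                           # multiply by c every tau steps
    return best_theta, best_f
```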
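Finally, the synthetic data described in the Open Datasets row can be generated in a few lines of NumPy. The two uniform distributions are quoted from the paper; the dimension n, the horizon T, and the per-round independence are assumptions made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 10, 1000                           # dimension and horizon: assumed values
A = rng.uniform(1.0, 2.0, size=(T, n))    # a ~ Unif([1, 2]^n), one draw per round
b = rng.uniform(1.0, n, size=T)           # b ~ Unif([1, n])
```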