Initializing Services in Interactive ML Systems for Diverse Users

Authors: Avinandan Bose, Mihaela Curmei, Daniel Jiang, Jamie H. Morgenstern, Sarah Dean, Lillian Ratliff, Maryam Fazel

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The theory is complemented by experiments on real as well as semi-synthetic datasets.
Researcher Affiliation | Academia | Avinandan Bose (University of Washington, avibose@cs.washington.edu); Mihaela Curmei (University of California, Berkeley, mcurmei@berkeley.edu); Daniel L. Jiang (University of Washington, danji@cs.washington.edu); Jamie Morgenstern (University of Washington, jamiemmt@cs.washington.edu); Sarah Dean (Cornell University, sdean@cornell.edu); Lillian J. Ratliff (University of Washington, ratliffl@uw.edu); Maryam Fazel (University of Washington, mfazel@uw.edu)
Pseudocode | Yes | Algorithm 1 (AcQUIre: Adaptively Querying Users for Initialization)
Open Source Code | Yes | "All our code is available at https://anonymous.4open.science/r/MultiServiceInitialization-A422"
Open Datasets | Yes | "using 2021 US Census data ... online movie recommendation task using the Movielens10M dataset [19]"
Dataset Splits | No | The paper explicitly mentions a train set and a test set for the Movielens data with a 50/50 split, but it does not specify a separate validation set for model tuning.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications. In the NeurIPS Paper Checklist, it states: "All our experiments can be run on personal devices."
Software Dependencies | No | The paper mentions using Surprise, a Python toolkit [23], for movie recommendations but does not specify its version. It also mentions least-squares regression without naming a specific library or version.
Experiment Setup | No | The paper describes the general steps of the experiment, such as user selection strategies and how services are updated, but it does not give numerical hyperparameters (e.g., learning rates, batch sizes, number of epochs) or system-level training settings for the models used.
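The Dataset Splits row flags that the paper reports a 50/50 train/test split on MovieLens without specifying how the split was made. A minimal sketch of one reproducible way to perform such a split is below; the seed, the shuffling procedure, and the `split_ratings` helper are all assumptions for illustration, not the paper's actual code.

```python
import random

def split_ratings(ratings, train_frac=0.5, seed=0):
    """Deterministically shuffle and split a list of rating records.

    Illustrative sketch of a 50/50 train/test split; the paper does not
    state its seed or shuffling procedure, so both are assumed here.
    """
    rng = random.Random(seed)      # fixed seed -> reproducible split
    shuffled = ratings[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Usage: (user_id, item_id, rating) triples stand in for MovieLens rows.
ratings = [(u, i, 3.0) for u in range(10) for i in range(10)]
train, test = split_ratings(ratings)
assert len(train) == len(test) == 50
```

Pinning a seed like this (and reporting it) is exactly the kind of detail whose absence the audit notes: without it, the same 50/50 ratio can still yield different train/test partitions across runs.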