Actively Learning Hemimetrics with Applications to Eliciting User Preferences

Authors: Adish Singla, Sebastian Tschiatschek, Andreas Krause

ICML 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on a restaurant recommendation data set support the conclusions of our theoretical analysis.
Researcher Affiliation | Academia | Adish Singla (ADISH.SINGLA@INF.ETHZ.CH), Sebastian Tschiatschek (SEBASTIAN.TSCHIATSCHEK@INF.ETHZ.CH), Andreas Krause (KRAUSEA@ETHZ.CH), ETH Zurich, Switzerland
Pseudocode | Yes | Algorithm 1 Our Algorithm: LEARNHM, Algorithm 2 Updating Lower & Upper Bounds: LU-PROJ, Algorithm 3 Updating Upper Bounds: U-PROJ, Algorithm 4 Updating Lower Bounds: L-PROJ, Algorithm 5 Query Policy: QCLIQUE. (A hedged sketch of the bound-propagation idea follows the table.)
Open Source Code | No | The paper does not provide a statement or link to open-source code for the described methodology.
Open Datasets | Yes | We use the recently proposed Yelp Dataset Challenge (round 7) data for our experiments. This data contains information about 77K businesses located across 10 cities around the world. We looked into businesses belonging to the category Restaurants and located in the city of Pittsburgh, PA. The link is https://www.yelp.com/dataset_challenge/ (A hypothetical filtering sketch follows the table.)
Dataset Splits | No | The paper does not specify training, validation, or test splits. The Yelp data is instead used to simulate user preference models and measure sample complexity, which differs from standard data splitting for model training and evaluation.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers.
Experiment Setup | Yes | In the experiments we used r = 1, and varied ϵ and n. When varying n (by sub-sampling), we used ϵ = 0.01. The distance between items i and j is quantified as W_{i,j} = w_1 W^{cuisine}_{i,j} + w_2 W^{review}_{i,j} + w_3 W^{geo}_{i,j} + w_4 W^{random}_{i,j}. The first model (Yelp M1) uses w_1 = 0.9, w_4 = 0.1, and w_2 = w_3 = 0; the second model (Yelp M2) is more generic, with weights w_1 = 0.5, w_2 = 0.2, w_3 = 0.2, w_4 = 0.1. (A sketch of this weighting follows the table.)
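The LU-PROJ / U-PROJ / L-PROJ routines listed under Pseudocode maintain elementwise lower and upper bounds on the unknown hemimetric and tighten them after each query. The Python sketch below only illustrates the underlying triangle-inequality propagation, assuming a query on pair (i, j) returns an interval [lo, hi] containing d(i, j); the function name and the single-pass lower-bound update are simplifications for illustration, not the paper's exact procedure.

```python
import numpy as np

def propagate_bounds(L, U, i, j, lo, hi):
    """Tighten elementwise bounds L <= d <= U on an unknown hemimetric d after
    a query on pair (i, j) returned the interval [lo, hi] for d(i, j).

    Illustrative only: it uses the directed triangle inequality
    d(x, z) <= d(x, y) + d(y, z); it is not the paper's exact LU-PROJ routine.
    """
    L, U = L.copy(), U.copy()
    L[i, j] = max(L[i, j], lo)
    U[i, j] = min(U[i, j], hi)
    n = L.shape[0]

    # Upper bounds: Floyd-Warshall-style closure, since the triangle
    # inequality allows U[x, z] <- min(U[x, z], U[x, y] + U[y, z]).
    for y in range(n):
        U = np.minimum(U, U[:, y][:, None] + U[y, :][None, :])

    # Lower bounds (one tightening pass): rearranging the same inequality gives
    #   d(x, y) >= d(x, z) - d(y, z) >= L[x, z] - U[y, z]
    #   d(y, z) >= d(x, z) - d(x, y) >= L[x, z] - U[x, y]
    for z in range(n):
        L = np.maximum(L, L[:, z][:, None] - U[:, z][None, :])
    for x in range(n):
        L = np.maximum(L, L[x, :][None, :] - U[x, :][:, None])

    return L, U


# Hypothetical usage: 5 items, bounds initialised to [0, r] with r = 1.
n, r = 5, 1.0
L0, U0 = np.zeros((n, n)), np.full((n, n), r)
np.fill_diagonal(U0, 0.0)
L1, U1 = propagate_bounds(L0, U0, i=0, j=1, lo=0.2, hi=0.25)
```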
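For the Open Datasets row, a hypothetical selection of the Restaurants-in-Pittsburgh subset might look as follows. The file name and the "city"/"categories" fields are assumptions about the round-7 Yelp Dataset Challenge release; the paper itself does not specify the loading code.

```python
import json

def load_pittsburgh_restaurants(path="yelp_academic_dataset_business.json"):
    """Hypothetical filter of Yelp business records to restaurants in
    Pittsburgh, PA, mirroring the selection described in the paper."""
    selected = []
    with open(path, encoding="utf-8") as f:
        for line in f:  # one JSON object per line
            business = json.loads(line)
            categories = business.get("categories") or []
            if isinstance(categories, str):  # some releases use a comma-separated string
                categories = [c.strip() for c in categories.split(",")]
            if business.get("city") == "Pittsburgh" and "Restaurants" in categories:
                selected.append(business)
    return selected
```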
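For the Experiment Setup row, the two simulated user models differ only in the weights applied to four precomputed pairwise distance matrices. A minimal sketch of that weighting, assuming the component matrices W_cuisine, W_review, W_geo, W_random are already available (their construction is described in the paper, not here):

```python
import numpy as np

def combine_distances(W_cuisine, W_review, W_geo, W_random, weights):
    """Weighted combination W = w1*W_cuisine + w2*W_review + w3*W_geo + w4*W_random
    over precomputed (n x n) pairwise distance matrices."""
    w1, w2, w3, w4 = weights
    return w1 * W_cuisine + w2 * W_review + w3 * W_geo + w4 * W_random

# Weights reported in the paper for the two simulated user models.
YELP_M1 = (0.9, 0.0, 0.0, 0.1)  # w1 = 0.9, w2 = w3 = 0, w4 = 0.1
YELP_M2 = (0.5, 0.2, 0.2, 0.1)  # more generic mixture
```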