An Estimation and Analysis Framework for the Rasch Model

Authors: Andrew Lan, Mung Chiang, Christoph Studer

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We now experimentally demonstrate the efficacy of the proposed framework. First, we use synthetically generated data to numerically compare our L-MMSE-based upper bound on the MSE of the PM estimator to the widely-used lower bound based on Fisher information (Zhang et al., 2011; Yang et al., 2012). We then use several real-world collaborative filtering datasets to show that the L-MMSE estimator achieves comparable predictive performance to that of the PM and MAP estimators.
Researcher Affiliation | Academia | Department of Electrical Engineering, Princeton University; Purdue University; School of Electrical and Computer Engineering, Cornell University.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code for the described methodology, nor does it include a link to a code repository.
Open Datasets | Yes | ML, a processed version of the ml-100k dataset from the MovieLens project (Herlocker et al., 1999).
Dataset Splits | Yes | We evaluate the prediction performance of the L-MMSE, MAP, PM, and Logit-MAP estimators using ten-fold cross validation. We randomly divide the entire dataset into ten equally-partitioned folds (of user-item response pairs), leave out one fold as the held-out testing set and use the other folds as the training set. We tune the prior variance parameter σx² using a separate validation set (one fold in the training set).
Hardware Specification | No | The paper mentions 'on a standard laptop computer' when discussing computational efficiency, but does not provide specific hardware details such as CPU/GPU models, processor types, or memory amounts used for the experiments.
Software Dependencies | No | The paper describes the methods used (e.g., 'standard Gibbs sampling procedure'), but it does not specify any software names with version numbers or list software dependencies with specific versions.
Experiment Setup | Yes | We vary the number of users U ∈ {20, 50, 100} and the number of items Q ∈ {20, 50, 100, 200}. We generate the user ability and item difficulty parameters from zero-mean Gaussian distributions with variance σx² = σa² = σd². We vary σx² so that the signal-to-noise ratio (SNR) corresponds to {−10, 0, 10} decibels (dB). We tune the prior variance parameter σx² using a separate validation set (one fold in the training set).
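The Fisher-information lower bound mentioned in the Research Type row can be illustrated generically for the Rasch model: each binary response contributes p(1 − p) to the information about a user's ability, and the Cramér-Rao bound on an unbiased estimator's MSE is the reciprocal of the total information. This is a minimal sketch of that standard calculation, not the paper's specific L-MMSE bound; the function names are illustrative.

```python
import numpy as np

def rasch_prob(ability, difficulty):
    """P(correct response) under the Rasch model: sigmoid(ability - difficulty)."""
    return 1.0 / (1.0 + np.exp(-(ability - difficulty)))

def fisher_info_ability(ability, difficulties):
    """Fisher information for one user's ability over the items answered.

    Each Bernoulli outcome with success probability p contributes p * (1 - p).
    """
    p = rasch_prob(ability, np.asarray(difficulties))
    return float(np.sum(p * (1.0 - p)))

difficulties = np.array([-1.0, 0.0, 1.0, 2.0])
info = fisher_info_ability(0.5, difficulties)
crlb = 1.0 / info  # Cramér-Rao lower bound on the MSE of an unbiased estimator
```

Information (and hence the bound's tightness) is maximal for items whose difficulty matches the user's ability, where p is close to 0.5.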
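The ten-fold protocol quoted in the Dataset Splits row (random partition of user-item pairs, one fold held out for testing, one training fold used as a validation set for tuning σx²) can be sketched as follows; the function name and the choice of which training fold serves as validation are assumptions for illustration.

```python
import numpy as np

def ten_fold_splits(num_pairs, seed=0):
    """Yield (train, validation, test) index arrays over user-item response
    pairs: ten random equal folds, one fold as the held-out test set, one
    training fold reserved for tuning the prior variance, the rest for training."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(num_pairs), 10)
    for k in range(10):
        test = folds[k]
        train_folds = [folds[j] for j in range(10) if j != k]
        val = train_folds[0]                    # one training fold for tuning
        train = np.concatenate(train_folds[1:])  # remaining eight folds
        yield train, val, test

# Usage: iterate over the ten train/validation/test splits.
for train_idx, val_idx, test_idx in ten_fold_splits(1000):
    pass  # fit estimator on train_idx, tune on val_idx, score on test_idx
```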
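The synthetic setup quoted in the Experiment Setup row (abilities and difficulties drawn from zero-mean Gaussians with a common variance chosen to meet a target SNR) can be sketched as below. One assumption is explicit: the paper does not state the SNR convention, so this sketch measures signal variance against the π²/3 variance of the standard logistic noise implicit in the Rasch model.

```python
import numpy as np

def generate_rasch_data(U, Q, snr_db, seed=0):
    """Draw abilities and difficulties from N(0, sigma2_x) and sample binary
    responses from the Rasch model.

    Assumption (not stated in the paper): SNR in dB is defined relative to
    the variance pi^2 / 3 of the standard logistic noise.
    """
    rng = np.random.default_rng(seed)
    noise_var = np.pi ** 2 / 3.0
    sigma2_x = noise_var * 10.0 ** (snr_db / 10.0)
    abilities = rng.normal(0.0, np.sqrt(sigma2_x), size=U)
    difficulties = rng.normal(0.0, np.sqrt(sigma2_x), size=Q)
    probs = 1.0 / (1.0 + np.exp(-(abilities[:, None] - difficulties[None, :])))
    responses = (rng.random((U, Q)) < probs).astype(int)  # 1 = correct response
    return abilities, difficulties, responses

# Sweep the grid quoted from the paper: U, Q, and SNR in {-10, 0, 10} dB.
for U in (20, 50, 100):
    for Q in (20, 50, 100, 200):
        for snr_db in (-10, 0, 10):
            a, d, Y = generate_rasch_data(U, Q, snr_db)
```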