Fast and Stable Maximum Likelihood Estimation for Incomplete Multinomial Models

Authors: Chenyang Zhang, Guosheng Yin

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results show that our algorithm runs faster than existing methods on synthetic data and real data." "In this section, we study large-sample properties and numerical performances of our algorithm on synthetic data and real data."
Researcher Affiliation | Academia | "Chenyang Zhang¹, Guosheng Yin¹. ¹Department of Statistics and Actuarial Science, University of Hong Kong, Hong Kong. Correspondence to: Guosheng Yin <gyin@hku.hk>."
Pseudocode | Yes | Algorithm 1 (Stable Weaver), reconstructed from the extracted text:

    Algorithm 1: Stable Weaver
    Input: observations (a, b, S)
    Initialize: p^(0) = (1/K, ..., 1/K)
    repeat
        τ = b / (S p^(t))                                  (element-wise division)
        τ⁺ = max(τ, 0),   τ⁻ = min(τ, 0)
        p^(t+1) = (a + (Sᵀ τ⁺) ∘ p^(t)) / (s₁ − Sᵀ τ⁻)     (∘ denotes element-wise product)
        p^(t+1) = p^(t+1) / sum(p^(t+1))
    until convergence
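Since the algorithm above is reconstructed from garbled extraction, the following NumPy sketch carries the same caveats. It is a minimal sketch, not the authors' implementation: the encoding of S as a (J, K) 0/1 subset-indicator matrix, the choice s₁ = sum(a) + sum(b) (the value the Lagrange multiplier takes at the stationary point, which the excerpt does not state), and the L1 stopping rule with tolerance ϵ are all assumptions.

    import numpy as np

    def stable_weaver(a, b, S, eps=1e-9, max_iter=10000):
        # a : (K,) complete counts per category
        # b : (J,) weights of incomplete observations (may be negative)
        # S : (J, K) 0/1 indicator matrix; row j marks the categories
        #     merged in the j-th incomplete observation (assumed encoding)
        K = a.shape[0]
        p = np.full(K, 1.0 / K)              # p^(0) = (1/K, ..., 1/K)
        s1 = a.sum() + b.sum()               # assumed definition of s1
        for _ in range(max_iter):
            tau = b / (S @ p)                # element-wise division
            tau_pos = np.maximum(tau, 0.0)   # τ⁺
            tau_neg = np.minimum(tau, 0.0)   # τ⁻
            # positive terms stay in the numerator, negative terms move to
            # the denominator, so every entry of p_new remains non-negative
            p_new = (a + (S.T @ tau_pos) * p) / (s1 - S.T @ tau_neg)
            p_new /= p_new.sum()             # project back onto the simplex
            if np.abs(p_new - p).sum() < eps:  # assumed stopping rule
                return p_new
            p = p_new
        return p

As a sanity check, with no incomplete observations (b an empty vector and S a 0-by-K matrix), the first iteration reduces to the closed-form multinomial MLE a / sum(a).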
Open Source Code | No | No explicit statement about providing open-source code for the described methodology or a link to a code repository was found.
Open Datasets | Yes | The paper uses the well-known NASCAR dataset (Hunter, 2004) as well as the custom-collected HKJC1416 and HKJC9916 datasets, referring to published work for the NASCAR data.
Dataset Splits | No | The paper specifies convergence criteria (e.g., "ϵ = 10⁻⁹ for the simulation studies in Section 4.1 and ϵ = 10⁻⁶ for the remaining experiments") but does not provide training, validation, or test dataset splits in terms of percentages, counts, or an explicit splitting methodology.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instances) used for running the experiments.
Software Dependencies | No | The paper mentions that "All the methods are coded in the form of matrix operations" but does not list software dependencies or libraries with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | No | The paper describes the initialization of its algorithm ("p^(0) = (1/K, ..., 1/K)") and the convergence tolerance ("ϵ = 10⁻⁹" or "ϵ = 10⁻⁶"), but does not provide hyperparameter values for a learning model (e.g., learning rate, batch size, epochs), optimizer settings, or detailed system-level training configurations.