Tree-Guided MCMC Inference for Normalized Random Measure Mixture Models

Authors: Juho Lee, Seungjin Choi

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on both synthetic and real-world datasets demonstrate the benefit of our method."
Researcher Affiliation | Academia | Juho Lee and Seungjin Choi, Department of Computer Science and Engineering, Pohang University of Science and Technology, 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea ({stonecold,seungjin}@postech.ac.kr)
Pseudocode | No | The paper describes the algorithm steps in textual form in Sections 3.1, 3.2, and 3.3, but it does not include a formally structured pseudocode block or an algorithm figure. (An illustrative sketch of this family of samplers appears after the table.)
Open Source Code | No | The paper states "More details on the algorithm can be found in the supplementary material," but it does not state that source code for the methodology is provided, nor does it link to a code repository.
Open Datasets | Yes | NIPS corpus, containing 1,500 documents with 12,419 words (https://archive.ics.uci.edu/ml/datasets/Bag+of+Words). A loading sketch follows the table.
Dataset Splits | No | The paper mentions a "toy dataset" and a "10K dataset" and runs experiments for specific durations and repetitions, but it does not specify train/validation/test splits, per-split sample counts, or any standard split methodology for reproducibility.
Hardware Specification | No | The paper does not describe the hardware used to run the experiments, such as GPU models, CPU models, or cloud computing instance types.
Software Dependencies | No | The paper describes the algorithms and models used (e.g., Gaussian likelihood with a Gaussian-Wishart base measure, multinomial likelihood with a Dirichlet base measure), but it does not list any specific software components or libraries with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | "Setting G = 20 and D = 2 was the moderate choice for all data we've tested. All the samplers were run for 10 seconds, and repeated 10 times. We ran the samplers for 1,000 seconds and repeated 10 times. We ran the samplers for 10,000 seconds and repeated 10 times." (A timing-harness sketch follows the table.)
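
Since the paper gives its sampler only in prose (Sections 3.1-3.3), the following is a minimal sketch of what this family of samplers looks like in code: a collapsed Gibbs sweep for a Dirichlet process mixture, the simplest normalized random measure. It is an illustrative stand-in, not the authors' tree-guided sampler; the 1-D Gaussian likelihood with known variance and all hyperparameter values are assumptions chosen to keep the example self-contained.

```python
import numpy as np

# Illustrative hyperparameters (assumptions, not values from the paper).
ALPHA = 1.0             # DP concentration parameter
MU0, TAU0 = 0.0, 3.0    # prior mean and std of each cluster mean
SIGMA = 1.0             # known observation std

def log_predictive(x, members):
    """Log posterior predictive of x under a cluster with the given members,
    for a Normal-mean / known-variance conjugate pair."""
    n = len(members)
    tau_n2 = 1.0 / (1.0 / TAU0**2 + n / SIGMA**2)  # posterior variance of the mean
    mu_n = tau_n2 * (MU0 / TAU0**2 + sum(members) / SIGMA**2)
    var = tau_n2 + SIGMA**2                        # predictive variance
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu_n)**2 / var)

def gibbs_sweep(x, z, rng):
    """One collapsed Gibbs sweep: resample each assignment z[i] given the rest."""
    for i in range(len(x)):
        # Cluster memberships with point i removed.
        clusters = {}
        for j, zj in enumerate(z):
            if j != i:
                clusters.setdefault(zj, []).append(x[j])
        labels = list(clusters) + ["new"]
        logp = np.array([
            np.log(ALPHA) + log_predictive(x[i], []) if k == "new"
            else np.log(len(clusters[k])) + log_predictive(x[i], clusters[k])
            for k in labels
        ])
        p = np.exp(logp - logp.max())
        p /= p.sum()
        choice = labels[rng.choice(len(labels), p=p)]
        z[i] = max(z) + 1 if choice == "new" else choice
    return z

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-4, 1, 50), rng.normal(4, 1, 50)])
z = [0] * len(x)
for _ in range(20):
    z = gibbs_sweep(x, z, rng)
print("clusters found:", len(set(z)))
```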
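
The NIPS corpus cited in the Open Datasets row is part of the UCI Bag of Words collection. A minimal loading sketch is below; the file names (docword.nips.txt.gz, vocab.nips.txt) follow the layout conventionally hosted at the UCI repository, so verify the URLs before relying on them.

```python
import gzip
import urllib.request

BASE = "https://archive.ics.uci.edu/ml/machine-learning-databases/bag-of-words/"

def load_nips_corpus():
    """Download the UCI Bag of Words NIPS corpus and return
    (n_docs, vocab, counts), where counts[(doc, word)] = frequency."""
    urllib.request.urlretrieve(BASE + "vocab.nips.txt", "vocab.nips.txt")
    urllib.request.urlretrieve(BASE + "docword.nips.txt.gz", "docword.nips.txt.gz")
    with open("vocab.nips.txt") as f:
        vocab = [line.strip() for line in f]
    counts = {}
    with gzip.open("docword.nips.txt.gz", "rt") as f:
        n_docs = int(f.readline())   # header line 1: number of documents
        n_words = int(f.readline())  # header line 2: vocabulary size
        f.readline()                 # header line 3: number of nonzero entries
        for line in f:
            doc, word, cnt = map(int, line.split())
            counts[(doc - 1, word - 1)] = cnt  # the file uses 1-based ids
    return n_docs, vocab, counts

if __name__ == "__main__":
    n_docs, vocab, counts = load_nips_corpus()
    print(n_docs, "documents,", len(vocab), "vocabulary terms")
```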
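
The quoted experiment setup is specified as wall-clock budgets (10, 1,000, and 10,000 seconds, each repeated 10 times) rather than iteration counts. A minimal harness for that protocol might look as follows; the sampler interface (a step() callable returning the current log-likelihood) is a hypothetical placeholder, since the paper provides no code.

```python
import random
import time

def run_timed(sampler_step, budget_seconds, n_repeats=10):
    """Run an MCMC sampler for a fixed wall-clock budget, repeated n times.

    sampler_step: hypothetical callable performing one MCMC iteration and
    returning the current joint log-likelihood. Each run records a trace of
    (elapsed_seconds, log_likelihood) pairs for time-vs-quality plots.
    """
    runs = []
    for _ in range(n_repeats):
        trace = []
        start = time.monotonic()
        while time.monotonic() - start < budget_seconds:
            loglik = sampler_step()
            trace.append((time.monotonic() - start, loglik))
        runs.append(trace)
    return runs

# Demo with a dummy step standing in for a real sampler; for the paper's
# protocol, budget_seconds would be 10, 1_000, or 10_000 with n_repeats=10.
traces = run_timed(lambda: random.gauss(0.0, 1.0), budget_seconds=1, n_repeats=2)
print(len(traces), "runs;", len(traces[0]), "iterations in the first run")
```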