Baxter Permutation Process

Authors: Masahiro Nakano, Akisato Kimura, Takeshi Yamada, Naonori Ueda

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We compare the BPP-based relational model with the BNP stochastic block models based on RP: (1) The IRM [33]... (2) The MP [49]... (3) The RTP [42]... We synthetically generated three relational matrices, with ground-truth partitions... We also used four social network datasets... For model comparison, we held out 20% of the cells of the input data for testing, and each model was trained by MCMC using the remaining 80% of the cells. We evaluated the models using perplexity as a criterion... Experimental results: Table 1 and Figure 8 summarize the test perplexity comparison results.
Researcher Affiliation | Industry | Masahiro Nakano, Akisato Kimura, Takeshi Yamada, Naonori Ueda; NTT Communication Science Laboratories, NTT Corporation. {masahiro.nakano.pr, akisato.kimura.xn, takeshi.yamada.bc, naonori.ueda.fr}@hco.ntt.co.jp
Pseudocode | Yes | Algorithm 1: Mapping Floorplan Partitioning to Baxter Permutation.
Open Source Code | Yes | The source code is available at https://github.com/nttcslab/baxter-permutation-process.
Open Datasets | Yes | We also used four social network datasets [54, 35] (corresponding to Figure 1): Wiki (top-left) [1], consisting of 7115 nodes and 103689 edges with diameter 7; Facebook (top-right) [2], consisting of 4039 nodes and 88234 edges with diameter 8; Twitter (bottom-left) [3], consisting of 81306 nodes and 1768149 edges with diameter 7; Epinion (bottom-right) [4], consisting of 75879 nodes and 508837 edges with diameter 14.
Dataset Splits | No | The paper states: 'For model comparison, we held out 20% cells of the input data for testing, and each model was trained by the MCMC using the remaining 80% of the cells.' This specifies a training/test split but does not explicitly mention a validation set or its split percentage. (A minimal sketch of this hold-out protocol appears after this table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper discusses the use of MCMC methods and references other methods, but it does not specify any software dependencies (e.g., libraries, frameworks, or solvers) with version numbers that would be needed to replicate the experiments.
Experiment Setup | No | While the paper mentions some model parameters (e.g., α, α0, a Gamma(1,1) prior for the IRM, a budget parameter of 3 for the MP), it does not provide a comprehensive experimental setup, such as concrete hyperparameter values or MCMC sampler settings (e.g., number of iterations, burn-in, or initialization).
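
As a companion to the evaluation protocol quoted above (hold out 20% of the cells, train on the remaining 80% by MCMC, report test perplexity), the following is a minimal Python sketch of the cell-level hold-out split and the perplexity computation. It assumes binary relational data, uses a toy matrix in place of the real datasets and a placeholder predictive probability in place of the posterior predictive of a trained model, and assumes perplexity is defined as the exponential of the negative mean held-out log-likelihood. It is not the authors' implementation, which lives in the repository linked above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary relational matrix standing in for the real datasets
# (hypothetical data; the paper's matrices come from the sources listed above).
X = rng.integers(0, 2, size=(100, 100))

# Hold out 20% of the cells for testing; the remaining 80% is the training set.
n_cells = X.size
test_idx = rng.choice(n_cells, size=int(0.2 * n_cells), replace=False)
test_mask = np.zeros(n_cells, dtype=bool)
test_mask[test_idx] = True
test_mask = test_mask.reshape(X.shape)
train_mask = ~test_mask

# Placeholder posterior predictive probability of each cell being 1.
# In the actual protocol this would be averaged over MCMC samples of the
# trained model (BPP, IRM, MP, or RTP); here we just use the training-cell mean.
predictive_prob = np.full(X.shape, X[train_mask].mean())

# Test perplexity, assumed here to be exp(-mean held-out log-likelihood).
eps = 1e-12
p = np.clip(predictive_prob[test_mask], eps, 1 - eps)
held_out_loglik = X[test_mask] * np.log(p) + (1 - X[test_mask]) * np.log(1 - p)
perplexity = np.exp(-held_out_loglik.mean())
print(f"test perplexity: {perplexity:.3f}")
```

Masking individual cells (rather than dropping whole rows or columns) keeps every node present in the training matrix, which is what "held out 20% cells of the input data" suggests.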