Beyond Exchangeability: The Chinese Voting Process
Authors: Moontae Lee, Seok Hyun Jin, David Mimno
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate this model on Amazon product reviews and more than 80 Stack Exchange forums, measuring the intrinsic quality of individual responses and behavioral coefficients of different communities. |
| Researcher Affiliation | Academia | Moontae Lee, Dept. of Computer Science, Cornell University, Ithaca, NY 14853, moontae@cs.cornell.edu; Seok Hyun Jin, Dept. of Computer Science, Cornell University, Ithaca, NY 14853, sj372@cornell.edu; David Mimno, Dept. of Information Science, Cornell University, Ithaca, NY 14853, mimno@cornell.edu |
| Pseudocode | No | Table 2 describes a 'Generative process' with sample parametrization, but it is a mathematical formulation of the model rather than structured pseudocode or an algorithm block for implementation. |
| Open Source Code | No | The paper provides a link for dataset access ('Dataset and statistics are available at https://archive.org/details/stackexchange.') but does not provide any link or explicit statement about the availability of the source code for the described methodology. |
| Open Datasets | Yes | The Amazon dataset [16] originally consisted of 595 products with daily snapshots of writing/voting trajectories from Oct 2012 to Mar 2013. For the Stack Exchange dataset, we filter out questions from each community with fewer than five answers besides the answer chosen by the question owner. We drop communities with fewer than 100 questions after pre-processing. (Footnote: Dataset and statistics are available at https://archive.org/details/stackexchange.) A hedged sketch of this pre-processing follows the table. |
| Dataset Splits | No | The paper describes training the model 'up to time t' and predicting 'at t + 1', but does not explicitly provide details about validation splits, specific percentages, or sample counts for data partitioning beyond this temporal split. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., programming languages, libraries, frameworks). |
| Experiment Setup | No | The paper does not provide specific details about the experimental setup, such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or other detailed training configurations. |
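
The pre-processing and the temporal evaluation boundary quoted in the Open Datasets and Dataset Splits rows are concrete enough to sketch. The Python fragment below is a minimal illustration only, not the authors' code (none is released): the record layout and field names (`answers`, `accepted`, `community`, `time`) are assumptions made for the example.

```python
from collections import defaultdict

# Hedged illustration only: all field names and data structures here are
# assumptions, since the paper does not release code.

def filter_stack_exchange(questions, min_other_answers=5, min_questions=100):
    """Keep questions with at least `min_other_answers` answers besides the
    one chosen by the question owner, then drop communities left with fewer
    than `min_questions` questions (the filtering described in the paper)."""
    by_community = defaultdict(list)
    for q in questions:
        others = [a for a in q["answers"] if not a.get("accepted", False)]
        if len(others) >= min_other_answers:
            by_community[q["community"]].append(q)
    return {c: qs for c, qs in by_community.items() if len(qs) >= min_questions}

def temporal_split(votes, t):
    """Split voting records at time t: train on snapshots up to t and hold
    out the snapshot at t + 1, the only partitioning the paper describes."""
    train = [v for v in votes if v["time"] <= t]
    held_out = [v for v in votes if v["time"] == t + 1]
    return train, held_out
```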