Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Approximating Naive Bayes on Unlabelled Categorical Data
Author: Cormac Herley
TMLR 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the algorithm on simulated data. We choose a random uniform Rj-dimensional vector for P(xj|bot) (where Rj is the feature cardinality along dimension j). For the clean distribution we choose an Rj-dimensional Zipf vector with exponential factor of 2. ... Figure 1 shows our comparison ROC curves for increasing values of P(bot) when the clean and abuse data are chosen from Zipf and random distributions as above. Each set involved 20 million samples, d = 4 categorical features, each of which had cardinality Rj = 20. |
| Researcher Affiliation | Industry | Cormac Herley Microsoft Research Redmond, WA |
| Pseudocode | Yes | For d features, with cardinalities R[j] this might be done in Python as follows: `import numpy` … `for j in range(d): P[j] = numpy.random.zipf(2, R[j]); Q[j] = numpy.random.rand(R[j]); P[j] /= P[j].sum(); Q[j] /= Q[j].sum()` |
| Open Source Code | No | The paper provides Python code snippets for data generation within the evaluation section, but it does not contain an explicit statement about releasing the source code for the full methodology described in the paper, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper explicitly states, "We evaluate the algorithm on simulated data." It describes how this data was generated but does not provide access information (link, DOI, etc.) for a publicly available or open dataset, nor does it use a well-known public dataset. |
| Dataset Splits | No | The paper mentions generating "20 million samples" for evaluation and that "The NB algorithm was trained on a labelled training set". However, it does not provide specific details on how this data was split into training, validation, or test sets for reproducibility (e.g., percentages, sample counts, or references to predefined splits). |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, or cloud instances) used to conduct the simulations or evaluations. |
| Software Dependencies | No | The paper includes Python code snippets that use 'numpy', but it does not specify version numbers for Python or any libraries used. |
| Experiment Setup | Yes | We showed in Section 4.2 that the ROC curve produced by our algorithm is unaffected by P(x |bot), P(x |bot) and K so long as K > 1. Thus, we arbitrarily chose P(x |bot) = 0.2, P(x |bot) = 0.1 and K = 2; ... Each set involved 20 million samples, d = 4 categorical features, each of which had cardinality Rj = 20. |
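The simulated-data setup quoted above (a Zipf clean distribution and a uniform bot distribution over d = 4 categorical features of cardinality Rj = 20) can be sketched as runnable Python. This is an illustrative reconstruction, not the authors' code: the sample count is reduced from the paper's 20 million, the mixing probability `p_bot` is an arbitrary illustrative value (the paper sweeps P(bot)), and the cast to float is needed because `numpy.random.zipf` returns integer samples.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 4          # number of categorical features (from the paper)
R = [20] * d   # cardinality R_j of each feature (from the paper)

# Per-feature distributions: P[j] for clean traffic, Q[j] for bot traffic.
P, Q = {}, {}
for j in range(d):
    # Clean distribution: Zipf mass with exponent 2, normalized to a
    # probability vector (cast to float; zipf draws are integers).
    P[j] = rng.zipf(2, R[j]).astype(float)
    P[j] /= P[j].sum()
    # Bot distribution: uniform random mass, normalized.
    Q[j] = rng.random(R[j])
    Q[j] /= Q[j].sum()

# Draw a small simulated sample: each row holds d categorical feature
# values, taken from the bot distribution with probability p_bot and
# from the clean distribution otherwise.
p_bot = 0.2      # illustrative choice; the paper varies P(bot)
n = 10_000       # reduced from the paper's 20 million samples
is_bot = rng.random(n) < p_bot
X = np.empty((n, d), dtype=int)
for j in range(d):
    clean = rng.choice(R[j], size=n, p=P[j])
    bot = rng.choice(R[j], size=n, p=Q[j])
    X[:, j] = np.where(is_bot, bot, clean)
```

Each column of `X` then follows the mixture (1 − p_bot)·P[j] + p_bot·Q[j], which is the unlabelled categorical data the paper's algorithm operates on.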