Recurrent Bayesian Classifier Chains for Exact Multi-Label Classification
Authors: Walter Gerych, Tom Hartvigsen, Luke Buquicchio, Emmanuel Agu, Elke A. Rundensteiner
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of our RBCC method on a variety of real-world multi-label datasets, where we routinely outperform the state of the art methods for exact multi-label classification. |
| Researcher Affiliation | Academia | Walter Gerych, Thomas Hartvigsen, Luke Buquicchio, Emmanuel Agu, Elke Rundensteiner Worcester Polytechnic Institute Worcester, MA {wgerych, twhartvigsen, ljbuquicchio, emmanuel, rundenst}@wpi.edu |
| Pseudocode | No | The paper describes the methodology in prose but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code for RBCC is available at https://github.com/waltergerych/RBCC. |
| Open Datasets | Yes | We evaluate our method on three commonly used benchmark multi-label datasets [9, 2, 21]; for all of these we use the provided train/test partitions, which are not modified. |
| Dataset Splits | No | The paper states 'for all of these we use the provided train/test partitions', but does not explicitly mention a 'validation' split or provide details on how validation data was partitioned. |
| Hardware Specification | Yes | Experiments were performed on a computing cluster, using an Intel(R) Xeon(R) Platinum 8160 CPU @ 2.10GHz, an NVIDIA Tesla V100 SXM2 GPU, and 128 GB of RAM. |
| Software Dependencies | No | The paper states 'Full details, including our specific architectural choices, are available in the Reproducibility Appendix.' but does not provide specific software names with version numbers in the main text. |
| Experiment Setup | No | The paper mentions architectural choices are consistent across models and that 'Full details, including our specific architectural choices, are available in the Reproducibility Appendix', but it does not provide concrete hyperparameter values or detailed training configurations in the main text. |