Cardinality-Aware Set Prediction and Top-$k$ Classification
Authors: Corinna Cortes, Anqi Mao, Christopher Mohri, Mehryar Mohri, Yutao Zhong
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We report the results of extensive experiments on CIFAR-10, CIFAR-100, ImageNet, and SVHN datasets demonstrating the effectiveness and benefits of our cardinality-aware algorithms. |
| Researcher Affiliation | Collaboration | Corinna Cortes (Google Research, New York, NY 10011) corinna@google.com; Anqi Mao (Courant Institute, New York, NY 10012) aqmao@cims.nyu.edu; Christopher Mohri (Stanford University, Stanford, CA 94305) xmohri@stanford.edu; Mehryar Mohri (Google Research & CIMS, New York, NY 10011) mohri@google.com; Yutao Zhong (Courant Institute, New York, NY 10012) yutao@cims.nyu.edu |
| Pseudocode | No | The paper describes algorithms in prose (e.g., in Section 3 'Cardinality-aware algorithms') but does not present them in structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We report empirical results for our cardinality-aware algorithm and show that it consistently outperforms top-k classifiers on benchmark datasets CIFAR-10, CIFAR-100 [Krizhevsky, 2009], SVHN [Netzer et al., 2011] and ImageNet [Deng et al., 2009]. |
| Dataset Splits | No | The paper mentions 'training time' and 'test set' but does not provide specific training/validation/test split percentages or sample counts for the datasets used in its main experiments. |
| Hardware Specification | Yes | For each model training, we use an Nvidia A100 GPU. |
| Software Dependencies | No | The paper mentions the use of 'Adam optimizer' but does not specify any software libraries or frameworks with version numbers (e.g., 'PyTorch 1.9' or 'TensorFlow 2.x') that are necessary for replication. |
| Experiment Setup | Yes | Both the classifier h and the cardinality selector r were trained using the Adam optimizer [Kingma and Ba, 2014], with a learning rate of $1 \times 10^{-3}$, a batch size of 128, and a weight decay of $1 \times 10^{-5}$. |
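
Since the paper reports these hyperparameters but names no framework, the following is a minimal sketch of the reported setup, assuming PyTorch. The `make_optimizer` and `predict_set` helpers are hypothetical names introduced here for illustration; `predict_set` only shows how a classifier's scores and a cardinality chosen by the selector r would yield a top-$k$ prediction set, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


def make_optimizer(model: nn.Module) -> torch.optim.Optimizer:
    # Adam with the hyperparameters reported in the paper:
    # learning rate 1e-3 and weight decay 1e-5 (batch size 128
    # would be set on the DataLoader).
    return torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)


def predict_set(scores: torch.Tensor, k: int) -> torch.Tensor:
    # Given per-class scores from the classifier h and a cardinality k
    # (as chosen by the selector r), return the top-k class indices
    # as the predicted set. Illustrative only; assumes 1-D scores.
    return torch.topk(scores, k).indices
```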