On the consistency of top-k surrogate losses
Authors: Forest Yang, Sanmi Koyejo
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our contributions are primarily theoretical, and are outlined as follows: ... We employ these losses in synthetic experiments, observing aspects of their behavior which reflect our theoretical analysis. |
| Researcher Affiliation | Collaboration | Forest Yang (1,2), Sanmi Koyejo (1,3). 1: Google Research Accra; 2: University of California, Berkeley; 3: University of Illinois at Urbana-Champaign. |
| Pseudocode | No | The paper describes its methods mathematically and textually, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper describes the generation of synthetic data for experiments ('We construct training data which matches the above setting', 'randomly sample from a d dimensional Gaussian'), but does not provide access information (link, DOI, citation) to a publicly available dataset. A hedged data-generation sketch follows the table. |
| Dataset Splits | No | The paper describes generating separate training and test sets but does not explicitly mention a validation set or specific proportions/counts for a train/validation/test split. |
| Hardware Specification | Yes | A machine with an 8th-generation Intel Core i7 CPU and 16 GB of RAM was used. |
| Software Dependencies | No | The paper mentions using 'Pytorch' but does not specify a version number or list other software dependencies with their versions. |
| Experiment Setup | Yes | We train our neural architecture on the data using batch gradient descent, setting the loss of the last layer to be each of {ψ1, ..., ψ5} with k = 2. ... We optimize with Adam for 500 epochs, using a learning rate of 0.1 and full batch. A hedged training-loop sketch follows the table. |
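
The quoted dataset description samples features from a d-dimensional Gaussian and attaches labels from a constructed conditional distribution. The sketch below illustrates that kind of synthetic data generation in PyTorch; the label model (a softmax of a random linear map) and the sizes `n`, `d`, and `num_classes` are placeholder assumptions, not the paper's exact construction.

```python
import torch

def make_synthetic_data(n=1000, d=10, num_classes=5, seed=0):
    """Sketch of Gaussian-feature synthetic data, as described in the quote.

    The conditional label distribution here (softmax of a random linear map)
    is a placeholder assumption, not the construction used in the paper.
    """
    g = torch.Generator().manual_seed(seed)
    X = torch.randn(n, d, generator=g)            # features ~ d-dimensional Gaussian
    W = torch.randn(d, num_classes, generator=g)  # hypothetical labeling weights
    probs = torch.softmax(X @ W, dim=1)           # placeholder conditional P(y | x)
    y = torch.multinomial(probs, 1, generator=g).squeeze(1)
    return X, y
```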
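
The experiment-setup quote specifies full-batch optimization with Adam, a learning rate of 0.1, 500 epochs, and evaluation at k = 2. The sketch below shows that protocol in PyTorch; the two-layer network and the use of cross-entropy as a stand-in surrogate are assumptions, since the paper's losses ψ1 through ψ5 are not reimplemented here.

```python
import torch
import torch.nn as nn

def train_full_batch(X, y, num_classes, surrogate_loss=None, epochs=500, lr=0.1):
    """Full-batch Adam training, per the quoted setup (500 epochs, lr = 0.1).

    `surrogate_loss` stands in for one of the paper's losses ψ1, ..., ψ5;
    cross-entropy is only a placeholder, and the small architecture below
    is an assumption, not the paper's.
    """
    loss_fn = surrogate_loss if surrogate_loss is not None else nn.CrossEntropyLoss()
    model = nn.Sequential(
        nn.Linear(X.shape[1], 64),
        nn.ReLU(),
        nn.Linear(64, num_classes),
    )
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):  # batch gradient descent: one full-batch step per epoch
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return model

def top_k_accuracy(model, X, y, k=2):
    """Top-k accuracy (k = 2 in the quoted experiments)."""
    with torch.no_grad():
        topk = model(X).topk(k, dim=1).indices
    return (topk == y.unsqueeze(1)).any(dim=1).float().mean().item()
```

A usage pass would chain the two sketches: generate data, train once per candidate surrogate, and compare top-2 accuracy on a separately generated test set, mirroring the experiment described in the table.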