Towards Maximizing the Representation Gap between In-Domain & Out-of-Distribution Examples
Authors: Jay Nandy, Wynne Hsu, Mong Li Lee
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct two sets of experiments: First, we experiment on a synthetic dataset. Next, we present a comparative study on a range of image classification tasks. |
| Researcher Affiliation | Academia | Jay Nandy, Wynne Hsu, Mong Li Lee; National University of Singapore; {jaynandy,whsu,leeml}@comp.nus.edu.sg |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code link: https://github.com/jayjaynandy/maximize-representation-gap |
| Open Datasets | Yes | We conduct two sets of experiments... Next, we present a comparative study on a range of image classification tasks. ... We carry out experiments on CIFAR-10 and CIFAR-100 [28] and Tiny ImageNet [29]... ImageNet-25K is obtained by randomly selecting 25,000 images from the ImageNet dataset [30]. (A hedged data-loading sketch appears below the table.) |
| Dataset Splits | No | The paper mentions training and test examples but does not explicitly provide details about a separate validation set or specific train/validation/test splits within the main text. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | No | The paper states that λ_in and λ_out are user-defined hyper-parameters and refers to Appendix B.1 for additional details on the experimental setup and hyper-parameters, but it does not provide concrete hyper-parameter values or detailed system-level training settings in the main text. (An illustrative loss structure using these weights is sketched below the table.) |
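
The datasets row quotes CIFAR-10, CIFAR-100, Tiny ImageNet, and an ImageNet-25K set built by randomly selecting 25,000 ImageNet images. The sketch below shows one plausible way to assemble these inputs with `torchvision`; the directory paths and the seed are hypothetical placeholders, not details taken from the paper.

```python
import random

import torchvision
import torchvision.transforms as T
from torch.utils.data import Subset

# CIFAR images are 32x32; resize other sources to match.
transform = T.Compose([T.Resize(32), T.CenterCrop(32), T.ToTensor()])

# In-domain training set (CIFAR-100 loads analogously; Tiny ImageNet
# is not bundled with torchvision and would also use ImageFolder).
cifar10_train = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)

# Hypothetical "ImageNet-25K" OOD set: 25,000 images sampled at random
# from a locally available ImageNet directory (path is a placeholder).
imagenet = torchvision.datasets.ImageFolder(
    root="/path/to/imagenet/train", transform=transform)
rng = random.Random(0)  # fixed seed so the subsample is reproducible
imagenet_25k = Subset(imagenet, rng.sample(range(len(imagenet)), k=25_000))
```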
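
The experiment-setup row notes that λ_in and λ_out are user-defined hyper-parameters whose values are deferred to Appendix B.1. As a minimal sketch only, the function below shows one generic way such weights can enter a composite objective that combines an in-domain term with an out-of-distribution term; the individual loss terms here (cross-entropy plus a KL push towards the uniform distribution) are common placeholder choices and are NOT the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def composite_loss(logits_in, targets_in, logits_out,
                   lambda_in=1.0, lambda_out=1.0):
    """Weighted sum of an in-domain loss and an OOD loss.

    lambda_in / lambda_out mirror the user-defined hyper-parameters
    named in the paper; the individual terms below are illustrative.
    """
    # In-domain term: standard cross-entropy on labelled examples.
    loss_in = F.cross_entropy(logits_in, targets_in)

    # OOD term: push predictions on OOD inputs towards the uniform
    # distribution (a common surrogate; the paper's own term differs).
    k = logits_out.size(1)
    uniform = torch.full_like(logits_out, 1.0 / k)
    loss_out = F.kl_div(F.log_softmax(logits_out, dim=1),
                        uniform, reduction="batchmean")

    return lambda_in * loss_in + lambda_out * loss_out
```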