Learning Human-Compatible Representations for Case-Based Decision Support
Authors: Han Liu, Yizhou Tian, Chacha Chen, Shi Feng, Yuxin Chen, Chenhao Tan
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using both synthetic data and human subject experiments in multiple classification tasks, we demonstrate that such representation is better aligned with human perception than representation solely optimized for classification. |
| Researcher Affiliation | Academia | Han Liu, Yizhou Tian, Chacha Chen, Shi Feng, Yuxin Chen & Chenhao Tan Department of Computer Science, University of Chicago {hanliu,tianh,chacha,shif,chenyuxin,chenhao}@uchicago.edu |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code and data are available at https://github.com/ChicagoHAI/learning-human-compatible-representations. |
| Open Datasets | Yes | butterfly vs. moth classification from ImageNet (Krizhevsky et al., 2012), and (ii) pneumonia classification based on chest X-rays (Kermany et al., 2018). |
| Dataset Splits | Yes | We generate 2000 images and randomly split the dataset into training, validation, and testing sets in a 60%:20%:20% ratio. |
| Hardware Specification | Yes | We use a computing cluster at our institution. We train our models on nodes with different GPUs including Nvidia GeForce RTX 2080 Ti, Nvidia GeForce RTX 3090, Nvidia Quadro RTX 8000, and Nvidia A40. |
| Software Dependencies | No | We use the PyTorch framework (Paszke et al., 2019) and the PyTorch Lightning framework (Falcon et al., 2019) for implementation. (Specific version numbers are not provided.) |
| Experiment Setup | Yes | We use the Adam optimizer (Kingma & Ba, 2014) with learning rate 1e-4. We use a training batch size of 40 for triplet prediction, and 30 for classification. |
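
The dataset-split evidence above (2000 images divided 60%:20%:20% into training, validation, and testing sets) can be sketched as follows. This is a minimal stdlib-only illustration, not the authors' released code; the function name, seed, and ratio parameter are assumptions for the example.

```python
import random

def split_dataset(items, ratios=(0.6, 0.2, 0.2), seed=0):
    """Randomly split `items` into train/val/test by the given ratios.

    Hypothetical helper mirroring the paper's reported 60%/20%/20%
    random split of 2000 generated images; the fixed seed is an
    assumption added here for reproducibility of the example.
    """
    items = list(items)
    rng = random.Random(seed)
    rng.shuffle(items)  # shuffle before slicing so the split is random
    n = len(items)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(range(2000))
print(len(train), len(val), len(test))  # 1200 400 400
```

With 2000 items this yields 1200/400/400 examples, matching the reported ratio.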