Unsupervised Order Learning
Authors: Seon-Ho Lee, Nyeong-Ho Shin, Chang-Su Kim
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on various orderable datasets demonstrate that UOL provides reliable ordered clustering results and decent rank estimation performances with no supervision. (Abstract) and This section provides various experimental results. (Section 4) |
| Researcher Affiliation | Academia | Seon-Ho Lee, Nyeong-Ho Shin & Chang-Su Kim, School of Electrical Engineering, Korea University, Seoul 02841, Korea. {seonholee,nhshin}@mcl.korea.ac.kr, changsukim@korea.ac.kr |
| Pseudocode | Yes | Algorithm 1 Unsupervised Order Learning (UOL) (Page 4) |
| Open Source Code | Yes | The source codes are available at https://github.com/seon92/UOL. (Abstract) |
| Open Datasets | Yes | MORPH II (Ricanek & Tesafaye, 2006)... CLAP2015 (Escalera et al., 2015)... DR (Dugas et al., 2015)... RetinaMNIST (Yang et al., 2021)... FER+ (Barsoum et al., 2016) (Section 4.2) |
| Dataset Splits | No | Setting A: 5,492 images of the Caucasian race are selected and then randomly divided into two disjoint subsets: 80% for training and 20% for testing. Setting B: 21,000 images... They are split into three disjoint subsets S1, S2, and S3. We use S2 for training and S1 + S3 for testing. (Appendix C.1) - This specifies train/test splits but no explicit validation split; a minimal split sketch follows the table. |
| Hardware Specification | Yes | We do all experiments using PyTorch (Paszke et al., 2019) and an NVIDIA GeForce RTX 3090 GPU. (Appendix C.1) |
| Software Dependencies | No | We do all experiments using PyTorch (Paszke et al., 2019) and an NVIDIA GeForce RTX 3090 GPU. (Appendix C.1) - While PyTorch is mentioned, its specific version number is not given. |
| Experiment Setup | Yes | We initialize the encoder h with VGG16 pre-trained on ILSVRC2012 (Deng et al., 2009). We use the Adam optimizer (Kingma & Ba, 2015) with a batch size of 32 and a weight decay of 5×10⁻⁴. We set the learning rate to 10⁻⁴. For data augmentation, we do random horizontal flips and random crops. (Section 4.1) - A minimal configuration sketch follows the table. |
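
The dataset-split protocol quoted in the Dataset Splits row (MORPH II, Setting A) amounts to a plain random 80%/20% train/test partition with no held-out validation set. Below is a minimal sketch of such a split; the function name and the `image_paths`/`seed` arguments are placeholders for illustration, not identifiers from the released code.

```python
import random

def split_80_20(image_paths, seed=0):
    """Randomly divide samples into two disjoint subsets, 80% train / 20% test,
    as in the MORPH II Setting A protocol quoted above (no validation split)."""
    rng = random.Random(seed)
    shuffled = list(image_paths)
    rng.shuffle(shuffled)
    cut = int(0.8 * len(shuffled))
    return shuffled[:cut], shuffled[cut:]
```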
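
The Experiment Setup row reads as a standard PyTorch training configuration. The sketch below only mirrors the reported hyperparameters (VGG16 encoder pre-trained on ImageNet, Adam, batch size 32, weight decay 5×10⁻⁴, learning rate 10⁻⁴, random horizontal flips and random crops); the crop size and the exact transform pipeline are assumptions, not details taken from the paper or its repository.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Encoder h: VGG16 pre-trained on ILSVRC2012 (ImageNet), as stated in Section 4.1.
encoder = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Adam optimizer with the reported learning rate and weight decay.
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4, weight_decay=5e-4)

# Reported augmentations: random horizontal flips and random crops.
# The crop size (224) and padding are assumptions; the quote does not state them.
train_transform = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomCrop(224, padding=4),
    T.ToTensor(),
])

batch_size = 32  # as reported in Section 4.1
```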