Deep Repulsive Clustering of Ordered Data Based on Order-Identity Decomposition
Authors: Seon-Ho Lee, Chang-Su Kim
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on facial age estimation, aesthetic score regression, and historical color image classification show that the proposed algorithm can cluster ordered data effectively and also yield excellent rank estimation performance. |
| Researcher Affiliation | Academia | Seon-Ho Lee and Chang-Su Kim School of Electrical Engineering, Korea University seonholee@mcl.korea.ac.kr, changsukim@korea.ac.kr |
| Pseudocode | Yes | Algorithm 1 DRC-ORID |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating the availability of open-source code for the described methodology. |
| Open Datasets | Yes | Datasets: We use two datasets. First, MORPH II (Ricanek & Tesafaye, 2006)... Second, the balanced dataset (Lim et al., 2020) is sampled from the three datasets of MORPH II, AFAD (Niu et al., 2016), and UTK (Zhang et al., 2017)... The aesthetics and attribute database (AADB) is composed of 10,000 photographs... (Kong et al., 2016)... HCI (Palermo et al., 2012) is a dataset... |
| Dataset Splits | Yes | Setting A 5,492 images of the Caucasian race are selected and then randomly divided into two non-overlapping parts: 80% for training and 20% for test. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud instance types used for running experiments. |
| Software Dependencies | No | The paper mentions software components like 'Adam optimizer', 'VGG16', and 'EfficientNet B4' but does not provide specific version numbers for these or other key software dependencies. |
| Experiment Setup | Yes | We use the Adam optimizer with a learning rate of 10⁻⁴ and decrease the rate by a factor of 0.5 every 50,000 steps. ... d_or and d_id are set to be 128 and 896, respectively. In Eq. (6), we set α to 0.1 and decrease it to 0.05 after 200 epochs. |
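The reported setup implies two simple schedules: a step-decayed learning rate and a piecewise-constant loss weight α. A minimal sketch of both, assuming only the values stated in the paper (the function names here are illustrative, not from the authors' code):

```python
def learning_rate(step, base_lr=1e-4, decay=0.5, interval=50_000):
    """Step decay as reported: lr = base_lr * decay ** (step // interval)."""
    return base_lr * decay ** (step // interval)

def alpha(epoch):
    """Weight in Eq. (6): 0.1 for the first 200 epochs, then 0.05."""
    return 0.1 if epoch < 200 else 0.05

# Example values implied by the paper's schedule:
print(learning_rate(0))        # 1e-4
print(learning_rate(100_000))  # 2.5e-5 (halved twice)
print(alpha(250))              # 0.05
```

In a typical PyTorch run this step decay would correspond to wrapping Adam with a step-based scheduler (e.g. `StepLR` with `step_size=50_000`, `gamma=0.5` called per iteration), though the paper does not name a specific framework.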