Label Distribution Learning Forests
Authors: Wei Shen, Kai Zhao, Yilu Guo, Alan L. Yuille
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We verify the effectiveness of our model on several LDL tasks, such as crowd opinion prediction on movies and disease prediction based on human genes, as well as one computer vision application, i.e., facial age estimation, showing significant improvements to the state-of-the-art LDL methods. |
| Researcher Affiliation | Collaboration | 1 Key Laboratory of Specialty Fiber Optics and Optical Access Networks, Shanghai Institute for Advanced Communication and Data Science, School of Communication and Information Engineering, Shanghai University 2 Department of Computer Science, Johns Hopkins University {shenwei1231,zhaok1206,gyl.luan0,alan.l.yuille}@gmail.com |
| Pseudocode | Yes | Algorithm 1 The training procedure of a LDLF. (A toy sketch of this alternating procedure appears after the table.) |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We used 3 popular LDL datasets in [6], Movie, Human Gene and Natural Scene. We download these datasets from http://cse.seu.edu.cn/people/xgeng/LDL/index.htm. We conduct facial age estimation experiments on Morph [24], which contains more than 50,000 facial images from about 13,000 people of different races. |
| Dataset Splits | Yes | Following [7, 27], we use 6 measures to evaluate the performances of LDL methods, which compute the average similarity/distance between the predicted rating distributions and the real rating distributions, including 4 distance measures (K-L, Euclidean, Sørensen, Squared χ²) and two similarity measures (Fidelity, Intersection). In all cases, following [27, 6], we split each dataset into 10 fixed folds and do standard ten-fold cross validation, reporting each result as mean ± standard deviation, which makes the result less sensitive to how the training and testing data are divided. Following [5], we do standard ten-fold cross validation and the results are summarized in Table 2. (A sketch of the six measures appears after the table.) |
| Hardware Specification | No | The paper mentions implementing LDLFs based on Caffe and evaluating performance but does not specify any hardware details like GPU or CPU models used for training or testing. |
| Software Dependencies | No | Our realization of LDLFs is based on Caffe [18]. It is modular and implemented as a standard neural network layer. The paper mentions Caffe but does not specify its version or any other software dependencies with version numbers. |
| Experiment Setup | Yes | The default settings for the parameters of our forests are: tree number (5), tree depth (7), output unit number of the feature learning function (64), iteration times to update leaf node predictions (20), the number of mini-batches to update leaf node predictions (100), maximum iteration (25000). (These defaults are collected into a config sketch below.) |
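The pseudocode row refers to the paper's alternating optimization: split-node parameters are updated by gradient descent while leaf predictions are frozen, and leaf predictions are then updated by a fixed-point iteration while split parameters are frozen. Below is a minimal toy sketch of that loop, assuming a single tree with linear split features standing in for the paper's deep network; the authors' implementation is a Caffe layer, so everything here (sizes, names, learning rate) is illustrative, not their code.

```python
# Toy sketch of Algorithm 1's alternating optimization (illustrative PyTorch,
# not the authors' Caffe layer): one tree, linear split features instead of a CNN.
import torch

torch.manual_seed(0)
N, F, C, depth = 256, 32, 8, 4             # toy sizes; the paper's default depth is 7
n_leaf = 2 ** depth

X = torch.randn(N, F)                      # inputs
D_gt = torch.distributions.Dirichlet(torch.ones(C)).sample((N,))  # ground-truth distributions

W = (0.1 * torch.randn(F, 2 ** depth - 1)).requires_grad_()  # split functions f_n(x) = w_n^T x
Q = torch.full((n_leaf, C), 1.0 / C)       # leaf predictions q_l, updated without gradients
opt = torch.optim.SGD([W], lr=0.5)

def leaf_probs(X, W):
    """Routing probability P(leaf | x) for a full binary tree in array order."""
    S = torch.sigmoid(X @ W)               # probability of branching left at each split node
    cols = []
    for leaf in range(n_leaf):
        node, p = 0, torch.ones(X.shape[0])
        for d in range(depth):
            bit = (leaf >> (depth - 1 - d)) & 1        # 0 = go left, 1 = go right
            p = p * (S[:, node] if bit == 0 else 1.0 - S[:, node])
            node = 2 * node + 1 + bit
        cols.append(p)
    return torch.stack(cols, dim=1)        # (N, n_leaf)

for step in range(200):
    # Step 1: gradient descent on split parameters, leaf predictions frozen.
    pred = (leaf_probs(X, W) @ Q).clamp_min(1e-12)     # p(y|x) = sum_l P(l|x) q_l(y)
    loss = -(D_gt * pred.log()).sum(dim=1).mean()      # cross-entropy (= K-L up to a constant)
    opt.zero_grad(); loss.backward(); opt.step()

    # Step 2: fixed-point update of leaf predictions, split parameters frozen:
    #   q_l(y) <- normalize( q_l(y) * sum_x d_x(y) P(l|x) / p(y|x) )
    if step % 10 == 0:
        with torch.no_grad():
            P = leaf_probs(X, W)
            pred = (P @ Q).clamp_min(1e-12)
            Q = Q * (P.t() @ (D_gt / pred))
            Q = Q / Q.sum(dim=1, keepdim=True)
```

The fixed-point step is what distinguishes this family of differentiable forests from plain backprop: leaf distributions are never touched by the optimizer, only renormalized against the data.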
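The six evaluation measures quoted in the Dataset Splits row have standard closed forms in the LDL literature. A small sketch (hypothetical NumPy helper; the paper does not publish its evaluation script) that averages them over a test set:

```python
# Standard LDL evaluation measures between ground-truth and predicted
# label distributions, averaged over samples (illustrative helper).
import numpy as np

def ldl_measures(d, p, eps=1e-12):
    """d, p: (N, C) arrays of ground-truth / predicted label distributions."""
    d, p = np.clip(d, eps, None), np.clip(p, eps, None)
    return {
        "K-L":          np.mean(np.sum(d * np.log(d / p), axis=1)),
        "Euclidean":    np.mean(np.sqrt(np.sum((d - p) ** 2, axis=1))),
        "Sorensen":     np.mean(np.sum(np.abs(d - p), axis=1) / np.sum(d + p, axis=1)),
        "Squared chi2": np.mean(np.sum((d - p) ** 2 / (d + p), axis=1)),
        "Fidelity":     np.mean(np.sum(np.sqrt(d * p), axis=1)),      # similarity
        "Intersection": np.mean(np.sum(np.minimum(d, p), axis=1)),    # similarity
    }
```

For the split protocol itself, a ten-fold partition can be generated with a standard splitter such as sklearn.model_selection.KFold(n_splits=10); note, though, that the paper uses 10 pre-defined fixed folds, so a seeded splitter is only an approximation of its exact setup.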
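For reference, the defaults quoted in the Experiment Setup row, collected into a single hypothetical config mapping (the key names are illustrative, not taken from the paper's code):

```python
# Default hyper-parameters quoted in the paper, as an illustrative config dict.
LDLF_DEFAULTS = {
    "num_trees": 5,              # tree number
    "tree_depth": 7,
    "feature_dim": 64,           # output units of the feature-learning function
    "leaf_update_iters": 20,     # iterations to update leaf node predictions
    "leaf_update_batches": 100,  # mini-batches used per leaf update
    "max_iter": 25000,           # maximum training iterations
}
```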