Convolutional Memory Blocks for Depth Data Representation Learning
Authors: Keze Wang, Liang Lin, Chuangjie Ren, Wei Zhang, Wenxiu Sun
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive evaluations on three public benchmarks demonstrate significant superiority of our framework over all the compared methods. More importantly, thanks to the enhanced learning efficiency, our framework can still achieve satisfying results using much less training data. |
| Researcher Affiliation | Collaboration | (1) School of Data and Computer Science, Sun Yat-sen University, China; (2) The Hong Kong Polytechnic University; (3) SenseTime Group Limited |
| Pseudocode | No | The paper describes its architecture and formulations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that source code for the methodology is openly available. |
| Open Datasets | Yes | We have evaluated the estimation performance of our framework on the newly created Kinect2 Human Pose Dataset (K2HPD) [Wang et al., 2016]... We have also used the Invariant-Top View Dataset (ITOP) dataset [Haque et al., 2016]... Moreover, we have conducted the experiment on the hand-depth image dataset [Xu and Cheng, 2013] named ASTAR... |
| Dataset Splits | No | The paper states: "For a fair comparison on these benchmarks, we follow the same training and testing setting as their officially defined." This implies that splits are externally defined, but the paper itself does not provide specific percentages or counts for training/validation/test splits. |
| Hardware Specification | Yes | All our experiments are carried out on a desktop with Intel 3.4 GHz CPU and NVIDIA GTX-980Ti GPU. |
| Software Dependencies | No | The paper mentions using the "Adam optimizer" but does not specify version numbers for any software libraries, frameworks, or other ancillary software components. |
| Experiment Setup | Yes | As for the training process, we train our model from scratch by Adam optimizer [Kendall and Cipolla, 2016] with the batch size of 16 and the initial learning rate of 0.00025, β1=0.9, β2=0.999. An exponential learning rate decay is applied with a decay rate of 0.95 every 1000 training iterations. |
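
The Experiment Setup row quotes the complete optimizer configuration, so the training schedule can be sketched directly from the report. The snippet below is a minimal sketch, assuming PyTorch (the paper does not name its framework); the model, loss function, and data are placeholders rather than the paper's network or benchmarks, and the staircase reading of the "decay rate of 0.95 every 1000 training iterations" is an assumption.

```python
# Minimal sketch of the quoted training schedule, assuming PyTorch.
# Only the Adam hyperparameters, batch size, and decay schedule come from the
# paper; the model, loss, and data below are placeholders.
import torch
from torch import nn
from torch.optim import Adam
from torch.optim.lr_scheduler import LambdaLR
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(128, 19)                      # placeholder network
criterion = nn.MSELoss()                        # placeholder loss
optimizer = Adam(model.parameters(),
                 lr=2.5e-4,                     # initial learning rate 0.00025
                 betas=(0.9, 0.999))            # beta1 = 0.9, beta2 = 0.999

# Multiply the learning rate by 0.95 every 1000 iterations
# (staircase interpretation of the exponential decay described in the paper).
scheduler = LambdaLR(optimizer, lr_lambda=lambda step: 0.95 ** (step // 1000))

# Dummy depth features and targets so the loop runs end to end.
data = TensorDataset(torch.randn(160, 128), torch.randn(160, 19))
loader = DataLoader(data, batch_size=16, shuffle=True)  # batch size 16

for inputs, targets in loader:
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    scheduler.step()
```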