Latent Smooth Skeleton Embedding
Authors: Li Wang, Qi Mao, Ivor Tsang
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness of the proposed methods on various datasets by comparing them with existing methods. |
| Researcher Affiliation | Collaboration | Li Wang (Department of Mathematics, Statistics and Computer Science, University of Illinois at Chicago, liwang8@uic.edu); Qi Mao (HERE Company, qimao.here@gmail.com); Ivor W. Tsang (Centre for Artificial Intelligence, University of Technology Sydney, Ivor.Tsang@uts.edu.au) |
| Pseudocode | No | The paper describes mathematical formulations and algorithmic steps but does not include a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | No | The paper does not provide any specific link or explicit statement about the availability of its source code. |
| Open Datasets | Yes | Two datasets were used in this experiment. One is teapot data (Weinberger, Sha, and Saul 2004), which consists of 400 color images of a teapot viewed from different angles in the plane... The other is breast cancer data, which contains the expression levels of over 25,000 gene transcripts obtained from 144 normal breast tissue samples and 1,989 tumor tissue samples (Mao et al. 2015). |
| Dataset Splits | No | The paper lists datasets used but does not provide specific details on how these datasets were split into training, validation, or test sets (e.g., percentages, exact counts, or specific splitting methodology). |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU or CPU models, memory specifications, or cloud computing instances used for running the experiments. |
| Software Dependencies | No | The paper mentions 'drtoolbox', 'MEU', 'SMCE', and 'Python' but does not provide specific version numbers for any software components or libraries. |
| Experiment Setup | Yes | For computational considerations, the number of landmarks was set to 40. As the ℓ2-regularized model is a generalized version of PSL in the case of C approaching infinity, we set C = 10³. A very broad, spherical Gaussian density with γ = 10⁵ is used as a surrogate for the uninformative prior. For PSL-ℓ1, β controls the sparsity of the similarity matrix, so our model does not have a preset number of nearest neighbors. We set β = 10³ to promote the sparsity of W. |
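The reported setup (40 landmarks, C = 10³, β = 10³, a broad Gaussian prior) can be sketched as a minimal configuration. This is an illustrative assumption, not the authors' released code: the random landmark selection, the stand-in data, and the Gaussian affinity below are placeholders standing in for the paper's actual PSL procedure.

```python
import numpy as np

# Hyperparameters reported in the paper's experiment setup.
N_LANDMARKS = 40   # number of landmarks
C = 1e3            # l2-regularization constant (PSL recovered as C -> infinity)
BETA = 1e3         # sparsity weight on the similarity matrix W (PSL-l1)

rng = np.random.default_rng(0)

# Stand-in data: e.g. the teapot set has 400 images; 3 features are a placeholder.
X = rng.standard_normal((400, 3))

# Illustrative landmark selection by uniform random sampling (an assumption;
# the paper does not specify the selection scheme in this excerpt).
idx = rng.choice(len(X), N_LANDMARKS, replace=False)
landmarks = X[idx]

# Gaussian affinity between points and landmarks; the kernel width chosen here
# (mean squared distance) is a common heuristic, not the paper's setting.
sq_dists = ((X[:, None, :] - landmarks[None, :, :]) ** 2).sum(axis=-1)
W = np.exp(-sq_dists / (2 * sq_dists.mean()))
print(W.shape)  # (400, 40): one affinity row per sample, one column per landmark
```

Because β only *promotes* sparsity of W rather than fixing a neighborhood size, the number of effective neighbors per point is data-driven, unlike k-NN graph constructions with a preset k.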