Landmark Ordinal Embedding
Authors: Nikhil Ghosh, Yuxin Chen, Yisong Yue
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically validate these characterizations on both synthetic and real triplet comparison datasets, and demonstrate dramatically improved computational efficiency over state-of-the-art baselines. (Section 6, Experiments) |
| Researcher Affiliation | Academia | Nikhil Ghosh UC Berkeley nikhil_ghosh@berkeley.edu Yuxin Chen UChicago chenyuxin@uchicago.edu Yisong Yue Caltech yyue@caltech.edu |
| Pseudocode | Yes | Algorithm 1 Landmark Ordinal Embedding (LOE) |
| Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the code for the described methodology, nor does it provide a direct link to a source-code repository or mention code in supplementary materials. |
| Open Datasets | Yes | We validate these characterizations empirically on both synthetic and real datasets. To evaluate our approach on less synthetic data, we followed the experiment conducted in [14] on the MNIST data set. To evaluate our method on a real data set and qualitatively assess embeddings, we used the food relative similarity dataset from [23] |
| Dataset Splits | No | The paper refers to using 'training' or 'test' data but does not provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology for training, validation, and testing sets). |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, that would be needed to replicate the experiment. |
| Experiment Setup | Yes | The points of the latent embedding were generated by a normal distribution: x_i ∼ N(0, (1/2d) I_d) for 1 ≤ i ≤ n. Triplet comparisons were made by a noisy BTL oracle. The total number of triplet queries m for embedding n items was set to be cn log n for various values of c. For MNIST, 200n log n triplet comparisons were drawn uniformly at random, based on the Euclidean distances between the digits, with each comparison being incorrect independently with probability ϵ_p = 0.15. An ordinal embedding with d = 5 was then generated and a k-means clustering of the embedding computed, with the number of k-means replicates set to 5 and the maximum number of iterations to 100. For LOE-STE, ϵ = 0.5. |
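The synthetic setup quoted above (latent points x_i ∼ N(0, (1/2d) I_d), m = cn log n uniformly drawn triplet queries, answers corrupted with probability 0.15) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the parameter values `n`, `d`, `c` are arbitrary choices, and `flip_prob` corresponds to the 0.15 noise level described for the MNIST experiment. The paper's synthetic experiments use a noisy BTL oracle whose exact response model is not quoted here, so a uniform-flip oracle is used instead as a stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (the paper varies c; n and d here are arbitrary).
n, d, c = 100, 5, 20
flip_prob = 0.15  # per-comparison error probability, as in the MNIST setup

# Latent embedding: x_i ~ N(0, (1/2d) I_d) for 1 <= i <= n.
X = rng.normal(0.0, np.sqrt(1.0 / (2 * d)), size=(n, d))

# Draw m = c * n * log(n) triplet queries (i; j, k) uniformly at random,
# keeping only triplets with three distinct indices.
m = int(c * n * np.log(n))
triplets = rng.integers(0, n, size=(m, 3))
distinct = (
    (triplets[:, 0] != triplets[:, 1])
    & (triplets[:, 0] != triplets[:, 2])
    & (triplets[:, 1] != triplets[:, 2])
)
triplets = triplets[distinct]

def noisy_oracle(X, triplets, p, rng):
    """Answer 'is i closer to j than to k?' based on Euclidean distances,
    flipping each answer independently with probability p."""
    i, j, k = triplets[:, 0], triplets[:, 1], triplets[:, 2]
    d_ij = np.linalg.norm(X[i] - X[j], axis=1)
    d_ik = np.linalg.norm(X[i] - X[k], axis=1)
    truth = d_ij < d_ik
    flips = rng.random(len(triplets)) < p
    return np.where(flips, ~truth, truth)

answers = noisy_oracle(X, triplets, flip_prob, rng)
print(len(answers), "noisy triplet answers generated")
```

The downstream evaluation described for MNIST (a d = 5 embedding clustered with k-means, 5 replicates, at most 100 iterations) would correspond to something like scikit-learn's `KMeans(n_clusters=10, n_init=5, max_iter=100)` applied to the learned embedding, though the paper does not name the software used.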