Ultra Fast Medoid Identification via Correlated Sequential Halving
Authors: Tavor Baharav, David Tse
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Four to five orders of magnitude gains over exact computation are obtained on real data, in terms of both the number of distance computations needed and wall clock time. Theoretical results are obtained to quantify such gains in terms of data parameters. Our code is publicly available online at https://github.com/TavorB/Correlated-Sequential-Halving. |
| Researcher Affiliation | Academia | Tavor Z. Baharav, Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (tavorb@stanford.edu); David Tse, Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (dntse@stanford.edu) |
| Pseudocode | Yes | Algorithm 1 Correlated Sequential Halving |
| Open Source Code | Yes | Our code is publicly available online at https://github.com/TavorB/Correlated-Sequential-Halving. |
| Open Datasets | Yes | The first dataset used was a single-cell RNA-Seq dataset, which contains the gene expressions corresponding to each cell in a tissue sample... We use the 10x Genomics dataset consisting of 27,998 gene expressions over 1.3 million neuron cells from the cortex, hippocampus, and subventricular zone of a mouse brain [4]. Another dataset we used was the famous Netflix-prize dataset [8]... The final dataset we used was the zeros from the commonly used MNIST dataset [11]. |
| Dataset Splits | No | The paper does not specify distinct training, validation, and test splits or cross-validation methods. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers. |
| Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameter values or training configurations. |
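For intuition about the Algorithm 1 referenced in the Pseudocode row, the sketch below illustrates the general idea of sequential halving with correlated (shared) reference samples for medoid identification. It is not the authors' implementation: the even per-round budget split, sampling references with replacement, and Euclidean distance are simplifying assumptions made here for illustration.

```python
import numpy as np

def correlated_sequential_halving(points, total_budget=20000, rng=None):
    """Illustrative sketch of correlated sequential halving for medoid search.

    The medoid is the point minimizing the sum of distances to all other
    points. Each round, every surviving candidate is scored against the
    SAME freshly drawn reference points (the 'correlated' part), so shared
    sampling noise cancels when candidates are compared; the worse half of
    the candidates is then discarded.
    """
    rng = np.random.default_rng(rng)
    n = len(points)
    candidates = np.arange(n)
    rounds = int(np.ceil(np.log2(n)))
    sums = np.zeros(n)    # running sums of sampled distances per candidate
    counts = np.zeros(n)  # number of reference samples drawn per candidate

    for _ in range(rounds):
        if len(candidates) == 1:
            break
        # split the distance-computation budget evenly over rounds and arms
        # (an assumed allocation rule, chosen here for simplicity)
        per_arm = max(1, total_budget // (len(candidates) * rounds))
        refs = rng.integers(0, n, size=per_arm)  # shared reference sample
        for c in candidates:
            d = np.linalg.norm(points[c] - points[refs], axis=1)
            sums[c] += d.sum()
            counts[c] += per_arm
        means = sums[candidates] / counts[candidates]
        # keep the half with the smallest estimated average distance
        keep = np.argsort(means)[: max(1, len(candidates) // 2)]
        candidates = candidates[keep]
    return int(candidates[0])
```

On a toy instance where one point sits at the center of a ring of points, the center is the clear medoid and the sketch recovers it with a modest sampling budget; the variance reduction from sharing references across candidates is what the paper's analysis quantifies.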