Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Multiresolution Matrix Factorization
Authors: Risi Kondor, Nedelina Teneva, Vikas Garg
ICML 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In the following sections we discuss the relationship of MMF to classical multiresolution analysis (Section 3), propose algorithms for computing MMFs (Section 4), take the first steps to analyze their theoretical properties (Section 5) and provide some experiments (Section 6). |
| Researcher Affiliation | Academia | Risi Kondor EMAIL, Nedelina Teneva EMAIL — Department of Computer Science, Department of Statistics, The University of Chicago; Vikas K. Garg EMAIL — Toyota Technological Institute at Chicago |
| Pseudocode | Yes | Algorithm 1 GREEDYJACOBI: computing the Jacobi MMF of A with d_ℓ = n − ℓ, and Algorithm 2 GREEDYPARALLEL: computing the binary parallel MMF of A with d_ℓ = n/2^ℓ. |
| Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository for its methodology. |
| Open Datasets | Yes | The first dataset is the well-known Zachary's Karate Club (Zachary, 1977) social network (N = 34, E = 78) for which we set A to be the heat kernel. The second one is constructed using simulated data from the family pedigrees in (Crossett et al., 2013)... We use several large datasets: GR (arXiv General Relativity collaboration graph, N = 5242) (Leskovec et al., 2007), Dexter (bag of words, N = 2000) (Asuncion & Newman, 2012), and HEP (arXiv High Energy Physics collaboration graph, N = 9877, see Supplement). |
| Dataset Splits | No | The paper mentions using several datasets for experiments but does not provide specific details on how these datasets were split into training, validation, or test sets (e.g., percentages, sample counts, or cross-validation setup). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, memory, or cloud instance types) used for running its experiments. |
| Software Dependencies | No | The paper mentions the "GCTA software of Yang et al. (2011)" but does not provide a specific version number. No other software dependencies with version numbers are listed. |
| Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameter values (e.g., learning rates, batch sizes), optimizer settings, or training schedules. |
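The Pseudocode row refers to GREEDYJACOBI, which assembles an MMF from a greedy sequence of Jacobi (Givens) rotations, conjugating A toward a core-diagonal form A ≈ Qᵀ H Q. The following is an illustrative sketch only, not the paper's exact algorithm: the function name, the largest-off-diagonal pair-selection rule, and the retire-one-coordinate-per-level scheme are assumptions made for exposition.

```python
import numpy as np

def greedy_jacobi_mmf(A, L):
    """Hedged sketch of a greedy Jacobi-rotation factorization A = Q.T @ H @ Q.

    At each of L levels, pick the active pair (i, j) with the largest
    off-diagonal magnitude, apply the Givens rotation that zeroes H[i, j],
    then retire coordinate j (a stand-in for designating a wavelet).
    """
    n = A.shape[0]
    H = A.copy().astype(float)
    Q = np.eye(n)
    active = list(range(n))
    for _ in range(L):
        if len(active) < 2:
            break
        # Greedy choice: the active pair with the largest |H[i, j]|.
        best, bi, bj = -1.0, active[0], active[1]
        for a, i in enumerate(active):
            for j in active[a + 1:]:
                if abs(H[i, j]) > best:
                    best, bi, bj = abs(H[i, j]), i, j
        # Classical Jacobi angle that annihilates H[bi, bj].
        theta = 0.5 * np.arctan2(2.0 * H[bi, bj], H[bj, bj] - H[bi, bi])
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(n)
        G[bi, bi] = c; G[bj, bj] = c
        G[bi, bj] = -s; G[bj, bi] = s
        H = G @ H @ G.T   # conjugate; symmetry is preserved exactly
        Q = G @ Q         # accumulate the orthogonal factor
        active.remove(bj) # retire one coordinate per level
    return Q, H
```

Because every step is an exact orthogonal conjugation, the sketch satisfies A = Qᵀ H Q up to floating-point error; the paper's actual algorithms additionally constrain the rotations and row dimensions (d_ℓ) as described in its Section 4.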