A Nearly-Linear Time Framework for Graph-Structured Sparsity
Authors: Chinmay Hegde, Piotr Indyk, Ludwig Schmidt
ICML 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper states that it complements "our theoretical analysis with experiments demonstrating that our algorithms also improve on prior work in practice." Section 6 provides this empirical evaluation on both synthetic and real data (a background-subtracted image, an angiogram, and an image of text). |
| Researcher Affiliation | Academia | Massachusetts Institute of Technology, Cambridge, MA 02139, USA |
| Pseudocode | Yes | Algorithm 1 PCSF-TAIL; Algorithm 2 GRAPH-COSAMP |
| Open Source Code | No | The paper states "The implementations were supplied by the authors" for comparative methods but does not provide concrete access to the source code for their own proposed methodology. |
| Open Datasets | No | The paper mentions using "synthetic and real data (a background-subtracted image, an angiogram, and an image of text)" and refers to supplementary material for a dataset description, but no concrete access information (link, DOI, specific citation with author/year for public dataset) is provided in the main paper. |
| Dataset Splits | No | No specific training/validation/test dataset splits are explicitly provided. The paper discusses observation count (n=6s) and success criteria but not data partitioning. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory, cloud instance types) used for experiments are provided in the paper. |
| Software Dependencies | No | The paper mentions other algorithms such as StructOMP, LaMP, Basis Pursuit, and CoSaMP, but it does not provide specific software dependencies (e.g., library names with version numbers) for its own implementation. |
| Experiment Setup | No | The paper provides some experimental context, such as the number of observations (n = 6s) and the success criteria, but it lacks specific parameter settings and detailed system-level configuration for the proposed algorithm. |