Taming the Wild: A Unified Analysis of Hogwild-Style Algorithms
Authors: Christopher M. De Sa, Ce Zhang, Kunle Olukotun, Christopher Ré
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show experimentally that our algorithms run efficiently for a variety of problems on modern hardware. |
| Researcher Affiliation | Academia | Christopher De Sa, Ce Zhang, Kunle Olukotun, and Christopher Ré (decsa@stanford.edu, czhang@cs.wisc.edu, kunle@stanford.edu, chrismre@stanford.edu), Departments of Electrical Engineering and Computer Science, Stanford University, Stanford, CA 94309 |
| Pseudocode | No | The paper describes algorithms using equations and text but does not include any explicitly labeled pseudocode blocks or algorithm listings. |
| Open Source Code | No | The paper does not provide an explicit statement about the release of source code for its methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We analyzed all four datasets reported in Dimm Witted [25] that favored HOGWILD!: Reuters and RCV1, which are text classification datasets; Forest, which arises from remote sensing; and Music, which is a music classification dataset. |
| Dataset Splits | No | The paper mentions analyzing datasets and training loss, but it does not explicitly describe any train/validation/test dataset splits or their sizes. |
| Hardware Specification | Yes | Experiments ran on a machine with two Xeon X650 CPUs, each with six hyperthreaded cores, and 24GB of RAM. |
| Software Dependencies | No | The paper mentions algorithms like SGD, HOGWILD!, and BUCKWILD! but does not provide specific version numbers for any software libraries, frameworks, or dependencies used in the experiments. |
| Experiment Setup | Yes | We ran SGD with step size α = 0.0001; however, results are similar across a range of step sizes. |
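The experiment-setup row quotes the paper's step size (α = 0.0001) for lock-free HOGWILD!-style SGD. As a minimal sketch of what such a run looks like, the toy script below has several threads update a shared weight vector without any locking, tolerating races by design. The least-squares problem, data sizes, thread count, and epoch count are all illustrative assumptions, not the paper's actual workload; only the step size is taken from the quoted setup.

```python
import threading

import numpy as np

# Illustrative toy problem (NOT the paper's datasets): noiseless least squares.
rng = np.random.default_rng(0)
n, d = 1000, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true

w = np.zeros(d)   # shared model, updated by all threads without locks
alpha = 0.0001    # step size from the paper's reported experiment setup

def worker(indices, epochs=50):
    """Hogwild-style worker: reads a possibly stale iterate, writes back
    a sample-gradient step with no synchronization."""
    for _ in range(epochs):
        for i in indices:
            grad = (X[i] @ w - y[i]) * X[i]
            w[:] -= alpha * grad  # in-place racy update, tolerated by design

# Four threads, each sweeping a disjoint slice of the data (assumed split).
threads = [threading.Thread(target=worker, args=(range(t, n, 4),))
           for t in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

On this noiseless problem the racy updates still drive the training loss down sharply, which is the behavior the paper's analysis explains; in a compiled, multi-socket setting the races and their cost look different than under Python's GIL.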