Experimental Design for Cost-Aware Learning of Causal Graphs
Authors: Erik Lindgren, Murat Kocaoglu, Alexandros G. Dimakis, Sriram Vishwanath
NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 7 (Experiments): We generate chordal graphs following the procedure of [31]; however, we modify the sampling algorithm so that we can control the maximum degree. In our first experiment we compare the greedy algorithm to two other algorithms. ... We compare the cost of the different algorithms when we (a) adjust the number of vertices while maintaining the average degree and (b) adjust the average degree while maintaining the number of vertices. We see that the greedy coloring algorithm performs almost optimally. ... Finally, we observe the empirical running time of the greedy algorithm. |
| Researcher Affiliation | Collaboration | Erik M. Lindgren, University of Texas at Austin (erikml@utexas.edu); Murat Kocaoglu, MIT-IBM Watson AI Lab (murat@ibm.com); Alexandros G. Dimakis, University of Texas at Austin (dimakis@austin.utexas.edu); Sriram Vishwanath, University of Texas at Austin (sriram@ece.utexas.edu) |
| Pseudocode | Yes | Algorithm 1: Greedy Coloring Algorithm with Quantization; Algorithm 2: Algorithm for Min Size and Unweighted Min Cost k-Sparse Intervention Design; Algorithm 3: Algorithm for Weighted Min Cost k-Sparse Intervention Design (a generic greedy-coloring sketch, not the paper's quantized variant, appears after the table) |
| Open Source Code | No | The paper describes algorithms and experiments but does not provide any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | No | We generate chordal graphs following the procedure of [31], however we modify the sampling algorithm so that we can control the maximum degree. ... The node weights are generated by the heavy-tailed Pareto distribution with scale parameter 2.0. |
| Dataset Splits | No | The paper describes graph generation and algorithm comparison but does not specify training, validation, or test dataset splits in the typical machine learning sense. |
| Hardware Specification | No | The paper mentions running times and the use of the Gurobi solver, stating 'The greedy algorithm terminates in 5 seconds. In contrast, the integer programming solution takes 128 seconds using the Gurobi solver [8].' However, it does not specify any hardware components like CPU or GPU models, or cloud instance types. |
| Software Dependencies | Yes | The integer programming solution takes 128 seconds using the Gurobi solver [8]. [8] Gurobi Optimization, LLC. Gurobi optimizer reference manual, 2018. |
| Experiment Setup | Yes | First we order the vertices {v_1, v_2, ..., v_n}. For vertex v_i we choose a vertex from {v_{i-b}, v_{i-b+1}, ..., v_{i-1}} uniformly at random and add it to the neighborhood of v_i. We then go through the vertices {v_{i-b}, v_{i-b+1}, ..., v_{i-1}} and add them to the neighborhood of v_i with probability d/b. ... The average degree stays close to 10 for all values of the number of vertices. ... The number of vertices is fixed at 500. ... The node weights are generated by the heavy-tailed Pareto distribution with scale parameter 2.0. The number of interventions m is fixed to 5. ... We restrict all interventions to be of size 10. (An illustrative code sketch of this sampling procedure and the Pareto weights follows the table.) |
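
The experiment-setup row describes the graph-sampling procedure and the node weights concretely enough to sketch. Below is a minimal, illustrative Python sketch of just the quoted steps; the names `sample_graph`, `pareto_weights`, `b` (backward window controlling the maximum degree), and `d` (target average degree) are assumptions, the sketch does not verify chordality or reproduce the full procedure of [31], and reading 'scale parameter 2.0' as the Pareto tail index is likewise an assumption.

```python
import numpy as np

def sample_graph(n, b, d, rng=None):
    """Sketch of the quoted sampling steps (parameter names are assumptions:
    n vertices, backward window of size b, target average degree d).
    Chordality is not checked; the paper's full procedure follows [31]."""
    rng = np.random.default_rng() if rng is None else rng
    adj = {i: set() for i in range(n)}          # adjacency lists for vertices 0..n-1
    for i in range(1, n):
        window = list(range(max(0, i - b), i))  # the vertices {v_{i-b}, ..., v_{i-1}}
        j = int(rng.choice(window))             # one window vertex chosen uniformly at random
        adj[i].add(j); adj[j].add(i)
        for k in window:                        # each remaining window vertex joins w.p. d/b
            if k != j and rng.random() < d / b:
                adj[i].add(k); adj[k].add(i)
    return adj

def pareto_weights(n, tail=2.0, rng=None):
    """Heavy-tailed node weights; treating the quoted 2.0 as the Pareto
    shape/tail index (with minimum value 1) is an assumption."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.pareto(tail, size=n) + 1.0
```

For instance, `sample_graph(500, b=20, d=10)` mimics the fixed-size setting quoted above (500 vertices, average degree near 10); the window size `b=20` is a placeholder, not a value reported in the paper.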
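
The pseudocode row names a "Greedy Coloring Algorithm with Quantization" whose steps are not reproduced in this table. Purely as a reference point, a plain first-fit greedy coloring over the adjacency lists returned by `sample_graph` above looks like the sketch below; it is not the paper's quantized, cost-aware variant.

```python
def greedy_coloring(adj, order=None):
    """First-fit greedy coloring: visit vertices in the given order and assign each
    the smallest color not already used by a colored neighbor. Generic building
    block only, not the paper's Algorithm 1."""
    order = list(adj) if order is None else order
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}  # colors taken by colored neighbors
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```

For chordal graphs, running first-fit greedy coloring along the reverse of a perfect elimination ordering is known to yield an optimal coloring, which is the kind of structural property greedy approaches on chordal graphs can exploit.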