Learning to Discover Sparse Graphical Models
Authors: Eugene Belilovsky, Kyle Kastner, Gael Varoquaux, Matthew B. Blaschko
ICML 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental evaluations focus on the challenging high dimensional settings in which p > n and consider both synthetic data and real data from genetics and neuroimaging. |
| Researcher Affiliation | Academia | KU Leuven, INRIA, University of Paris-Saclay, University of Montreal. |
| Pseudocode | Yes | Algorithm 1 Training a GGM edge estimator |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., specific repository link, explicit code release statement) for the source code of the methodology. |
| Open Datasets | Yes | We use the ABIDE dataset (Di Martino et al., 2014), a large scale resting-state fMRI dataset. |
| Dataset Splits | Yes | Each network is trained continuously with new samples generated until the validation error saturates. |
| Hardware Specification | No | We compute the average execution time of our method compared to Graph Lasso and BDGraph on a CPU in Table 3. |
| Software Dependencies | No | No specific version numbers are provided for software components like scikit-learn or the R-packages, only names and citations. |
| Experiment Setup | Yes | We train networks taking in 39, 50, and 500 node graphs. ... In all cases we have 50 feature maps of 3×3 kernels. The 39 and 50 node networks have 6 convolutional layers and dk = k + 1; the 500 node network has 8 convolutional layers and dk = 2k + 1. We use ReLU activations. The last layer has a 1×1 convolution and a sigmoid outputting a value of 0 to 1 for each edge. ... The networks are optimized using ADAM (Kingma & Ba, 2015) coupled with cross-entropy loss as the objective function (cf. Sec. 2.1). We use batch normalization at each layer. |
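To make the Experiment Setup row concrete, below is a minimal PyTorch sketch of a dilated-convolution edge estimator matching the quoted hyperparameters (50 feature maps, 3×3 kernels, growing dilation, batch norm and ReLU per layer, 1×1 convolution with a sigmoid over every entry of the p×p input matrix, ADAM with cross-entropy loss). The class and variable names, the 0-indexed dilation schedule, the padding scheme, and the use of PyTorch itself are assumptions for illustration; the authors' actual implementation may differ.

```python
import torch
import torch.nn as nn

class EdgeEstimatorCNN(nn.Module):
    """Sketch of the edge estimator described in the Experiment Setup row:
    stacked 3x3 dilated convolutions with batch norm and ReLU, followed by a
    1x1 convolution and a sigmoid giving a value in [0, 1] for each edge.
    Hypothetical naming; dilation is assumed to follow d_k for k = 0, 1, ..."""

    def __init__(self, n_layers=6, n_maps=50, dilation_fn=lambda k: k + 1):
        super().__init__()
        layers = []
        in_ch = 1  # input: the p x p empirical covariance/correlation matrix
        for k in range(n_layers):
            d = dilation_fn(k)
            layers += [
                # padding = d keeps the p x p spatial size for a 3x3 dilated kernel
                nn.Conv2d(in_ch, n_maps, kernel_size=3, dilation=d, padding=d),
                nn.BatchNorm2d(n_maps),
                nn.ReLU(),
            ]
            in_ch = n_maps
        self.features = nn.Sequential(*layers)
        # final 1x1 convolution + sigmoid -> one edge probability per matrix entry
        self.edge_head = nn.Sequential(nn.Conv2d(n_maps, 1, kernel_size=1),
                                       nn.Sigmoid())

    def forward(self, cov):  # cov: (batch, 1, p, p)
        return self.edge_head(self.features(cov))  # (batch, 1, p, p) edge scores


# 39/50-node configuration from the quote: 6 conv layers, d_k = k + 1
model_small = EdgeEstimatorCNN(n_layers=6, dilation_fn=lambda k: k + 1)
# 500-node configuration from the quote: 8 conv layers, d_k = 2k + 1
model_large = EdgeEstimatorCNN(n_layers=8, dilation_fn=lambda k: 2 * k + 1)

optimizer = torch.optim.Adam(model_small.parameters())
loss_fn = nn.BCELoss()  # per-edge cross-entropy on the sigmoid outputs
```

Per the Pseudocode and Dataset Splits rows, training would feed freshly generated synthetic covariance matrices to such a model until validation error saturates; that outer loop is described in the paper's Algorithm 1 and is not reproduced in this sketch.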