Meta Learning for Support Recovery in High-dimensional Precision Matrix Estimation
Authors: Qian Zhang, Yilin Zheng, Jean Honorio
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 5. Validation Experiments We validate our theories with synthetic experiments by reporting the success rate for the recovery of the support union. We simulate Erdos-Renyi random graphs in this experiment and compare the results of our estimator in (5) with four multi-task learning methods. |
| Researcher Affiliation | Academia | 1Department of Statistics, Purdue University, West Lafayette, USA 2Department of Computer Science, Purdue University, West Lafayette, USA. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., specific repository link, explicit code release statement) for the source code of the methodology described in this paper. |
| Open Datasets | Yes | We also use our two-step meta learning method to conduct experiments with two real-world datasets, the single-cell gene expression dataset from (Kouno et al., 2013) and the cancer genome atlas dataset from http://tcga-data.nci.nih.gov/tcga/. |
| Dataset Splits | No | The paper mentions using specific numbers of samples for auxiliary tasks and novel tasks (e.g., '10 samples of each task 1 to 7' and '10 samples of task 8'), but does not specify general train/validation/test dataset splits as percentages, predefined citations, or detailed splitting methodology needed for reproduction. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., exact GPU/CPU models, processor types) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | For Figure 1, we fix the number of auxiliary tasks K = 10 and run experiments with sample size per auxiliary task n = (C log N)/K for C ranging from 5 to 200. We simulate Erdos-Renyi random graphs in this experiment and compare the results of our estimator in (5) with four multi-task learning methods. We first generate Ω by assigning an edge with probability d/(N − 1) for each pair of nodes (i, j). Then for each edge (i, j), we set Ω_ij = Ω_ji to −1 with probability 0.5 and to 1 otherwise. For 1 ≤ k ≤ K and (i, j) ∈ S, Ω^(k)_ij is set to Ω_ij · X_ij with X_ij ∼ Bernoulli(0.9). Then we add some constant to the diagonal elements of all the precision matrices to ensure that their minimum eigenvalue is at least 0.1. |
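The generation procedure quoted above can be sketched in a few lines of NumPy. This is not the authors' code (none was released); it is a minimal reconstruction of the described steps, with the function name, default sizes, and the `max(0, ...)` diagonal shift being assumptions on our part.

```python
import numpy as np

def generate_precision_matrices(N=50, K=10, d=3, seed=0):
    """Sketch of the described Erdos-Renyi precision-matrix generation.

    Returns the shared matrix Omega and K per-task matrices Omega^(k).
    """
    rng = np.random.default_rng(seed)
    # Shared support: include edge (i, j) with probability d/(N - 1),
    # with value -1 (prob 0.5) or +1 (prob 0.5), symmetrically.
    Omega = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            if rng.random() < d / (N - 1):
                Omega[i, j] = Omega[j, i] = -1.0 if rng.random() < 0.5 else 1.0
    tasks = []
    for _ in range(K):
        Om_k = Omega.copy()
        # Keep each shared edge in task k with probability 0.9 (Bernoulli mask).
        for i in range(N):
            for j in range(i + 1, N):
                if Omega[i, j] != 0 and rng.random() >= 0.9:
                    Om_k[i, j] = Om_k[j, i] = 0.0
        # Shift the diagonal so the minimum eigenvalue is at least 0.1.
        min_eig = np.linalg.eigvalsh(Om_k).min()
        Om_k += max(0.0, 0.1 - min_eig) * np.eye(N)
        tasks.append(Om_k)
    return Omega, tasks
```

Samples for each auxiliary task k would then be drawn from a zero-mean Gaussian with covariance equal to the inverse of Omega^(k).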