Learning to Learn Graph Topologies

Authors: Xingyue Pu, Tianyue Cao, Xiaoyun Zhang, Xiaowen Dong, Siheng Chen

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments on both synthetic and real-world data demonstrate that our model is more efficient than classic iterative algorithms in learning a graph with specific topological properties."
Researcher Affiliation | Academia | "Xingyue Pu, University of Oxford, xpu@robots.ox.ac.uk; Tianyue Cao, Xiaoyun Zhang, Shanghai Jiao Tong University, {vanessa_,xiaoyun.zhang}@sjtu.edu.cn; Xiaowen Dong, University of Oxford, xdong@robots.ox.ac.uk; Siheng Chen, Shanghai Jiao Tong University, sihengc@sjtu.edu.cn"
Pseudocode | Yes | "Algorithm 1 PDS" and "Algorithm 2 Unrolling Net (L2G)"
Open Source Code | Yes | "We also release the code for implementation^4." (Footnote 4: https://github.com/xpuoxford/L2G-neurips2021)
Open Datasets | Yes | "We use the daily returns of stocks obtained from Yahoo Finance^5." (Footnote 5: https://pypi.org/project/yahoofinancials/) "We collect data from the dataset^6, which contains 539 subjects with autism spectrum disorder and 573 typical controls." (Footnote 6: http://preprocessed-connectomes-project.org/abide/) (An illustrative data-fetch sketch follows the table.)
Dataset Splits | No | The paper mentions training and testing but does not provide specific details on a validation split (e.g., percentages or sample counts) in the main text.
Hardware Specification | No | The paper does not specify the exact hardware used for the experiments (e.g., specific GPU/CPU models or cloud instances) in the main text.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments.
Experiment Setup | No | The paper states "More implementation details, e.g. model configurations and training settings, can be found in Appendix." (Section 5), indicating that these specifics are not present in the main text.
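As context for the Open Datasets row, below is a minimal sketch of how daily stock returns might be pulled with the yahoofinancials package cited in footnote 5. This is not the authors' data pipeline: the tickers and date range are hypothetical placeholders, and the paper does not list the exact stocks or dates in the main text.

```python
# Illustrative sketch only (not the authors' code): fetching daily stock
# returns with the yahoofinancials package referenced in footnote 5.
# Tickers and date range below are hypothetical placeholders.
import numpy as np
from yahoofinancials import YahooFinancials

tickers = ["AAPL", "MSFT", "GOOG"]        # placeholder tickers
start, end = "2020-01-01", "2020-12-31"   # placeholder date range

data = YahooFinancials(tickers).get_historical_price_data(start, end, "daily")

returns = {}
for ticker in tickers:
    # Adjusted closing prices in chronological order
    closes = np.array([p["adjclose"] for p in data[ticker]["prices"]
                       if p.get("adjclose") is not None])
    # Simple daily returns: (p_t - p_{t-1}) / p_{t-1}
    returns[ticker] = closes[1:] / closes[:-1] - 1.0
```

The resulting per-ticker return series is the kind of observation matrix a graph-learning method such as L2G would take as input; how the authors preprocessed their own stock data is described in their paper and released code, not here.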