Effective and Efficient Structural Inference with Reservoir Computing
Authors: Aoran Wang, Tsz Pan Tong, Jun Pang
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results on various datasets, including biological networks, simulated fMRI data, and physical simulations, show the effectiveness and efficiency of our proposed method for structural inference, either with much fewer trajectories or with much shorter trajectories compared with previous works. |
| Researcher Affiliation | Academia | 1 Faculty of Science, Technology and Medicine, University of Luxembourg, Luxembourg; 2 Institute for Advanced Studies, University of Luxembourg, Luxembourg. Correspondence to: Aoran Wang <aoran.wang@uni.lu>, Tsz Pan Tong <tszpan.tong@uni.lu>, Jun Pang <jun.pang@uni.lu>. |
| Pseudocode | Yes | We describe the pipeline of BO in this work in Algorithm 1, where ⊙ represents element-wise multiplication. Interestingly, Liu et al. (2022a) introduce the notion of attraction points (see Section A.3 for details) during gradient descent. The attraction of (ψ, θ) is where the gradient descent algorithm cannot make improvements, and it is implemented with an approximation in step 1 of Algorithm 1. BOME can optimize the UL objective function from this attraction of the LL optimization without being trapped at the discontinuous attraction points on the boundary of the attraction basin. In our case, after setting the value of T properly, the attraction is where the optimization of J_LL stagnates, and the adjacency matrix inferred by the framework at this step is no less accurate than the one inferred by the vanilla VAE-based method. The training process then turns to optimizing the upper-level objective function J_UL. With the help of BOME, the optimization of the RC branch can also benefit the other branch and further promote structural-inference performance. [...] We summarize the whole pipeline in Figure 1 and in Algorithm 2 in the appendix. (A minimal sketch of this BOME-style update appears after the table.) |
| Open Source Code | Yes | For details about the implementation please refer to the link attached in the supplementary material. |
| Open Datasets | Yes | We test our model on the six directed synthetic biological networks (Pratapa et al., 2020), namely Linear (LI), Linear Long (LL), Cycle (CY), Bifurcating (BF), Trifurcating (TF), and Bifurcating Converging (BF-CV) networks, which are essential components leading to a variety of different trajectories that are commonly observed in differentiating and developing cells (Saelens et al., 2019). [...] We also test our model on NetSim datasets (Smith et al., 2011) of simulated fMRI data. [...] Besides, we also select three physical simulations mentioned in (Kipf et al., 2018), namely springs, charged particles, and phase-coupled oscillators (the Kuramoto model). |
| Dataset Splits | Yes | We randomly divide the trajectories into a training set, a validation set, and a test set with a ratio of 8:2:2. (See the split sketch after this table.) |
| Hardware Specification | Yes | The experiments are run on one NVIDIA Tesla V100 SXM2 32G graphics card, with two Xeon Gold 6132 @ 2.6GHz CPUs. |
| Software Dependencies | Yes | We implement RCSI in PyTorch (Paszke et al., 2019) with the help of scikit-learn (Pedregosa et al., 2011) to calculate metrics. |
| Experiment Setup | Yes | We set the maximum epoch as 1000, and set the batch size as 128 for datasets that have no more than 10 nodes, and 64 for datasets having more than 10 nodes. We use the Adam optimizer (Kingma & Ba, 2015) for the training of both branches with a learning rate of 5e-4, and we reduce the learning rate to 50% if there is no loss drop in the past 100 epochs. [...] For example, we set T = 10 and η = 0.5 for BOME in this work, and set the weights in the loss functions identical to those of iSIDG (Wang & Pang, 2022). (See the optimizer/scheduler sketch after this table.) |
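The BOME-style bilevel update quoted in the Pseudocode row (T lower-level gradient steps, a value-function gap as the attraction approximation, then an upper-level step) can be sketched as below. This is a minimal, hypothetical sketch following the first-order method of Liu et al. (2022), not the authors' released code: `J_UL`, `J_LL`, and `model` are placeholder names, and the multiplier rule is the standard BOME choice with the quoted T = 10 and η = 0.5.

```python
import copy

import torch


def bome_step(model, J_UL, J_LL, T=10, eta=0.5, lr=5e-4):
    """One hypothetical BOME-style outer step (after Liu et al., 2022).

    J_UL and J_LL are placeholder callables mapping the model to scalar
    upper- and lower-level losses; T = 10 and eta = 0.5 follow the values
    quoted in the paper.
    """
    params = [p for p in model.parameters() if p.requires_grad]

    # Step 1: approximate the lower-level attraction point theta* by
    # running T gradient-descent steps on J_LL from the current parameters.
    frozen = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(frozen.parameters(), lr=lr)
    for _ in range(T):
        inner_opt.zero_grad()
        J_LL(frozen).backward()
        inner_opt.step()

    # Value-function gap q = J_LL(theta) - J_LL(theta*); it vanishes exactly
    # at the attraction of the lower-level optimization.
    q = J_LL(model) - J_LL(frozen).detach()
    grad_q = torch.autograd.grad(q, params)

    loss_ul = J_UL(model)
    grad_f = torch.autograd.grad(loss_ul, params)

    # BOME's first-order multiplier:
    # lambda = max((eta * ||grad_q||^2 - <grad_f, grad_q>) / ||grad_q||^2, 0)
    dot = sum((gf * gq).sum() for gf, gq in zip(grad_f, grad_q))
    sq = sum((gq * gq).sum() for gq in grad_q) + 1e-12
    lam = torch.clamp((eta * sq - dot) / sq, min=0.0)

    # Descend along grad J_UL + lambda * grad q.
    with torch.no_grad():
        for p, gf, gq in zip(params, grad_f, grad_q):
            p -= lr * (gf + lam * gq)
    return float(loss_ul), float(q)
```

One outer call first drives the lower-level loss toward its attraction, then takes an upper-level step that is constrained not to undo that progress, which matches the two-phase behavior described in the quoted excerpt.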
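For the 8:2:2 split quoted in the Dataset Splits row (i.e., fractions 8/12, 2/12, and 2/12 of the trajectories), a minimal sketch; the function and variable names are illustrative and not taken from the paper's code:

```python
import torch


def split_trajectories(trajectories, ratios=(8, 2, 2), seed=0):
    """Randomly split a (num_trajectories, ...) tensor in an 8:2:2 ratio."""
    n = trajectories.shape[0]
    perm = torch.randperm(n, generator=torch.Generator().manual_seed(seed))
    total = sum(ratios)
    n_train = n * ratios[0] // total
    n_val = n * ratios[1] // total
    train = trajectories[perm[:n_train]]
    val = trajectories[perm[n_train:n_train + n_val]]
    test = trajectories[perm[n_train + n_val:]]
    return train, val, test
```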
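The quoted optimizer and schedule map onto standard PyTorch components. A hypothetical configuration sketch, assuming `model` and `num_nodes` are supplied by the surrounding training script; `ReduceLROnPlateau` with `factor=0.5` and `patience=100` approximates the quoted "reduce the learning rate to 50% after 100 epochs without a loss drop":

```python
import torch
from torch.optim.lr_scheduler import ReduceLROnPlateau


def make_training_setup(model, num_nodes):
    # Batch size 128 for datasets with no more than 10 nodes, else 64.
    batch_size = 128 if num_nodes <= 10 else 64
    # Adam optimizer with the quoted learning rate of 5e-4.
    optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
    # Halve the learning rate when the monitored loss has not dropped for
    # 100 epochs (call scheduler.step(epoch_loss) once per epoch).
    scheduler = ReduceLROnPlateau(optimizer, mode="min", factor=0.5,
                                  patience=100)
    max_epochs = 1000
    return batch_size, optimizer, scheduler, max_epochs
```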