Convolutional Imputation of Matrix Networks

Authors: Qingyun Sun, Mengyuan Yan, David Donoho, Stephen Boyd

ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the algorithm with a variety of applications such as MRI and Facebook user network. (Abstract); 7. Experimental results (section heading)
Researcher Affiliation | Academia | (1) Department of Mathematics, Stanford University, California, USA; (2) Department of Electrical Engineering, Stanford University, California, USA; (3) Department of Statistics, Stanford University, California, USA.
Pseudocode | Yes | Iterative Imputation: input P_Ω(A). Initialization A^est_0 = 0, t = 0. For λ^1 > λ^2 > … > λ^C, where λ^j = (λ^j_k), k = 1, …, N, do: A^impute = P_Ω(A) + P_Ω⊥(A^est_t); Â^impute = U A^impute; Â^est_{t+1}(k) = S_{λ^j_k}(Â^impute(k)); A^est_{t+1} = U^{-1} Â^est_{t+1}; t = t + 1; repeat until ‖A^est_t − A^est_{t−1}‖_2 / ‖A^est_{t−1}‖_2 < ϵ. Assign A_{λ^j} = A^est_t. End for. Output: the sequence of solutions A_{λ^1}, …, A_{λ^C}. (Section 6) (A hedged NumPy sketch of this loop appears below the table.)
Open Source Code | No | The paper does not contain an explicit statement offering the source code for its methodology, nor does it provide a link to a public repository.
Open Datasets | Yes | We take the ego networks from the SNAP Facebook dataset (33). (Section 7) (Reference 33: Jure Leskovec and Julian J. McAuley. Learning to discover social circles in ego networks. Advances in Neural Information Processing Systems, pages 539–547, 2012.)
Dataset Splits | No | The paper describes sampling schemes to generate partial observations (e.g., 'sampled each frame of the MRI images with i.i.d. Bernoulli distribution p = 0.2, and 2 out of 88 frames are completely unobserved'), but it does not specify conventional train/validation/test dataset splits for model training or hyperparameter tuning. (A sketch of such a sampling mask appears below the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU or GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions related software like 'softimpute' (25), but it does not specify any software dependencies (e.g., libraries, frameworks) with version numbers used for its own implementation or experiments.
Experiment Setup | Yes | The sequence of regularization parameters is chosen such that λ^1_k > λ^2_k > … > λ^C_k for each k. The solution for each iteration with λ^s is a warm start for the next iteration with λ^{s+1}. Our recommended choice is to choose λ^1_k as the largest singular value for Â^impute(k), and decay λ^s at a constant speed λ^{s+1} = c·λ^s. (Section 6) Also: adding i.i.d. Gaussian noise with mean 0 and variance σ²/50 to all observed entries. (Section 7, Facebook network) (A sketch of this schedule and noise model appears below the table.)
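
For the Pseudocode row, a minimal NumPy sketch of the quoted iterative imputation loop follows. This is not the authors' implementation: the transform U is left abstract, the FFT pair at the end is only one plausible instantiation of it, and the names iterative_imputation, eps, and max_iter are introduced here for illustration.

```python
import numpy as np

def svd_soft_threshold(M, lam):
    """Soft-threshold the singular values of one matrix slice (the S_lambda step)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt

def iterative_imputation(A_obs, mask, lam, transform, inverse_transform,
                         A_init=None, eps=1e-4, max_iter=200):
    """One run of the imputation loop for a fixed threshold vector lam.

    A_obs  : array of shape (N, m, n), zero-filled on unobserved entries.
    mask   : boolean array of the same shape, True where an entry is observed.
    lam    : length-N vector of per-slice thresholds lambda_k.
    transform / inverse_transform : the transform U along the network axis
        and its inverse (left abstract here).
    """
    A_est = np.zeros_like(A_obs, dtype=float) if A_init is None else A_init
    for _ in range(max_iter):
        # Observed entries come from the data, unobserved ones from the estimate.
        A_impute = np.where(mask, A_obs, A_est)
        A_hat = transform(A_impute)
        # Soft-threshold the singular values of each transformed slice.
        A_hat_new = np.stack([svd_soft_threshold(A_hat[k], lam[k])
                              for k in range(A_hat.shape[0])])
        A_new = inverse_transform(A_hat_new)
        rel_change = (np.linalg.norm(A_new - A_est)
                      / max(np.linalg.norm(A_est), 1e-12))
        A_est = A_new
        if rel_change < eps:
            break
    return A_est

# One plausible choice of transform pair (an assumption, not from the paper's text):
# a unitary FFT along the network axis, taking the real part on the way back.
fft_transform = lambda X: np.fft.fft(X, axis=0) / np.sqrt(X.shape[0])
fft_inverse = lambda X: np.real(np.fft.ifft(X, axis=0) * np.sqrt(X.shape[0]))
```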
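
For the Dataset Splits row, the quoted MRI sampling scheme (i.i.d. Bernoulli observations with p = 0.2 plus 2 of 88 completely unobserved frames) could be mimicked with a mask like the sketch below; the 64x64 frame size and the random seed are assumptions, not values stated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, n = 88, 64, 64             # 88 frames; the 64x64 frame size is an assumption
p = 0.2                          # Bernoulli observation probability from the paper

# Observe each entry independently with probability p.
mask = rng.random((N, m, n)) < p

# Hide 2 of the 88 frames entirely, as in the quoted MRI experiment.
hidden_frames = rng.choice(N, size=2, replace=False)
mask[hidden_frames] = False
```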
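
For the Experiment Setup row, a sketch of the decaying threshold schedule and of the Gaussian observation noise follows. The schedule length C, the decay rate c, and sigma are placeholders: the paper only states that λ^1_k is the largest singular value for Â^impute(k), that the decay speed is constant, and that the noise variance is σ²/50.

```python
import numpy as np

def lambda_schedule(A_hat_impute, C=20, c=0.7):
    """Per-slice threshold sequence lambda^1 > lambda^2 > ... > lambda^C.

    lambda^1_k is the largest singular value of the transformed slice k;
    later thresholds decay at a constant rate lambda^{s+1} = c * lambda^s
    (C and c are illustrative values, not taken from the paper).
    """
    lam1 = np.array([np.linalg.svd(A_hat_impute[k], compute_uv=False)[0]
                     for k in range(A_hat_impute.shape[0])])
    return [lam1 * c**s for s in range(C)]

def add_observation_noise(A_obs, mask, sigma=1.0, rng=None):
    """Add i.i.d. Gaussian noise with mean 0 and variance sigma^2 / 50 to the
    observed entries only (the value of sigma is an assumption)."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.normal(0.0, np.sqrt(sigma**2 / 50.0), size=A_obs.shape)
    return A_obs + mask * noise
```

Under the warm-start scheme quoted above, the estimate returned for λ^s would be passed as the starting point (A_init in the earlier sketch) for the run with λ^{s+1}.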