Dynamic Nonlinear Matrix Completion for Time-Varying Data Imputation

Authors: Jicong Fan

AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical results show that D-NLMC outperforms the baselines in real applications.
Researcher Affiliation | Academia | Jicong Fan, (1) The Chinese University of Hong Kong (Shenzhen); (2) Shenzhen Research Institute of Big Data, Shenzhen, China; fanjicong@cuhk.edu.cn
Pseudocode | Yes | Algorithm 1: Fast EVD for K_t. (A sketch of the plain truncated EVD that this algorithm accelerates appears after the table.)
Open Source Code | No | The paper does not provide any statement about releasing the source code for its described methodology, nor does it include any links to a code repository.
Open Datasets | Yes | We test the proposed method on the SML2010 indoor temperature dataset from the UCI machine learning repository. The dataset consists of 2764 samples of 24 variables such as indoor temperature, relative humidity, and lighting. (https://archive.ics.uci.edu/ml/datasets/SML2010)
Dataset Splits | No | The paper describes randomly removing fractions of entries or blocks to test performance, and mentions a training period of 20 columns for the synthetic data, but it does not specify explicit train/validation/test splits (e.g., percentages or sample counts) needed for reproduction. (A sketch of such a masking protocol appears after the table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper mentions using L-BFGS (Liu and Nocedal 1989) for optimization, but it does not provide specific version numbers for any software, libraries, or frameworks used in the implementation or experimentation.
Experiment Setup | Yes | In D-NLMC, we set w = 20, R = 15, and use a Gaussian kernel with $\sigma = \frac{\mu}{w^2}\sum_{i=1}^{w}\sum_{j=1}^{w}\|x_i - x_j\|$ (similar to Fan, Zhang, and Udell 2020), where $\mu$ is a constant such as 1 or 3. In this case, we use $\mu = 1$. (A kernel-construction sketch appears below.)
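
To make the experiment-setup row concrete, here is a minimal sketch of the window kernel construction it describes. The bandwidth formula above is reconstructed from garbled PDF text, so the Euclidean norm inside the double sum and the exp(-d^2 / (2 sigma^2)) kernel convention are assumptions, and the function name and example data are hypothetical.

```python
import numpy as np

def gaussian_kernel_window(X_w, mu=1.0):
    """Build the Gaussian kernel matrix for a window of w data columns.

    X_w : (d, w) array, the w most recent columns (w = 20 in the paper).
    mu  : scaling constant (the paper reports mu = 1 for this dataset).

    Assumes sigma = (mu / w^2) * sum_{i,j} ||x_i - x_j||, i.e. mu times
    the mean pairwise distance between columns (reconstructed formula).
    """
    w = X_w.shape[1]
    # Pairwise Euclidean distances between the w columns.
    diff = X_w[:, :, None] - X_w[:, None, :]   # shape (d, w, w)
    dists = np.sqrt((diff ** 2).sum(axis=0))   # shape (w, w)
    sigma = (mu / w**2) * dists.sum()
    # Gaussian (RBF) kernel; the 2*sigma^2 scaling is a common convention.
    K_t = np.exp(-dists**2 / (2.0 * sigma**2))
    return K_t, sigma

# Example: a random window of w = 20 columns over 24 variables,
# matching the window size and variable count reported above.
rng = np.random.default_rng(0)
X_w = rng.standard_normal((24, 20))
K_t, sigma = gaussian_kernel_window(X_w, mu=1.0)
```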
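The pseudocode row names "Algorithm 1: Fast EVD for K_t", but the paper's fast update itself is not reproduced here. The sketch below shows only the plain rank-R truncated eigendecomposition that such an algorithm speeds up, with R = 15 as in the reported setup; truncated_evd is a hypothetical name.

```python
import numpy as np

def truncated_evd(K_t, R=15):
    """Rank-R truncated eigendecomposition of a symmetric kernel matrix.

    Naive O(w^3) baseline: the paper's Algorithm 1 ("Fast EVD for K_t")
    replaces this with a faster update whose details are not shown here.
    """
    eigvals, eigvecs = np.linalg.eigh(K_t)  # eigenvalues in ascending order
    # Keep the R largest eigenpairs, largest first.
    return eigvals[-R:][::-1], eigvecs[:, -R:][:, ::-1]
```

Applied to the K_t from the previous sketch, this yields the R = 15 leading eigenpairs that a kernel-based completion model would operate on at each time step.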
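Since the dataset-splits row notes that evaluation relies on randomly hiding entries rather than fixed splits, here is a minimal sketch of that masking protocol; the 30% missing rate and both function names are illustrative, not values from the paper.

```python
import numpy as np

def mask_entries(X, missing_frac=0.3, seed=None):
    """Randomly hide a fraction of entries for imputation testing.

    Returns the observed matrix (NaN at hidden positions) and the
    boolean mask (True = hidden). The fraction is illustrative; the
    paper tests several fractions and also block removals.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random(X.shape) < missing_frac
    X_obs = X.copy()
    X_obs[mask] = np.nan
    return X_obs, mask

def rmse_on_missing(X_true, X_imputed, mask):
    """Imputation error measured only on the hidden entries."""
    err = X_true[mask] - X_imputed[mask]
    return float(np.sqrt(np.mean(err ** 2)))
```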