Robust Asymmetric Bayesian Adaptive Matrix Factorization

Authors: Xin Guo, Boyuan Pan, Deng Cai, Xiaofei He

IJCAI 2017

Reproducibility assessment (Variable: Result, followed by the supporting LLM response):
Research Type: Experimental
Quote: "We compare ALAMF with other state-of-the-art matrix factorization methods both on data sets ranging from synthetic and real-world application. The experimental results demonstrate the effectiveness of our proposed approach." Also, from Section 5 (Experiments): "In this section, we empirically compare the proposed ALAMF model with seven state-of-the-art methods."
Researcher Affiliation: Academia
Xin Guo, Boyuan Pan, Deng Cai, Xiaofei He. State Key Lab of CAD&CG, College of Computer Science, Zhejiang University, China. Emails: guoxinzju@gmail.com, panby@zju.edu.cn, {dengcai, xiaofeihe}@cad.zju.edu.cn
Pseudocode: Yes
The paper presents "Algorithm 1: Variational Inference for ALAMF".
Open Source Code: No
No explicit statement or link to open-source code for the methodology was found in the paper.
Open Datasets: Yes
Quote: "Similar to [Meng et al., 2013], we study a real application using face images captured under varying illumination. We generate some relatively large datasets and some relatively small datasets in the experiments. Firstly, a larger dataset was built by using the first and fifth subsets of the Extended Yale B dataset (Georghiades, Belhumeur, and Kriegman 2001; Basri and Jacobs 2003)."
Dataset Splits: No
The paper describes data corruption and missingness levels, but does not provide explicit training, validation, or test splits in terms of percentages or sample counts. For example, treating "20% of entries in Ygt as missing data" describes an input-data condition, not a dataset split in the usual supervised-learning sense.
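The missingness condition quoted from the paper ("20% of entries in Ygt as missing data") can be sketched as follows. This is a minimal NumPy illustration, not the paper's code; the matrix shape and random seed are assumptions, and the name Y_gt simply mirrors the paper's notation for the ground-truth matrix.

```python
import numpy as np

# Hypothetical ground-truth matrix; dimensions are illustrative only.
rng = np.random.default_rng(0)
Y_gt = rng.standard_normal((48, 42))

# Mark roughly 20% of entries as missing, mirroring the paper's setup.
mask = rng.random(Y_gt.shape) >= 0.2   # True = observed (~80% of entries)
Y_obs = np.where(mask, Y_gt, np.nan)   # missing entries become NaN
```

A factorization method would then be fit only to the observed entries indicated by `mask`.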
Hardware Specification: No
The paper provides no details about the hardware used to run the experiments (e.g., CPU/GPU models, memory).
Software Dependencies: No
No software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) are mentioned in the paper.
Experiment Setup: Yes
Quote: "For all the experiments we have conducted, the hyperparameters of ALAMF are fixed without further tuning: a0 = b0 = 10^-4, a1 = b1 = 0.1, α = 1. For all the methods, we set the rank of the low-rank component to 8 and apply the random initialization strategy to U and V."
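The quoted setup can be written down concretely. Below is a hedged sketch: the hyperparameter values, the rank of 8, and the random initialization of U and V come from the quote above, while the dictionary layout, matrix dimensions, and variable names m and n are my own illustrative assumptions.

```python
import numpy as np

# Fixed ALAMF hyperparameters as reported in the paper (no further tuning).
# Keys follow the paper's notation; the dict layout is an assumption.
hyperparams = {"a0": 1e-4, "b0": 1e-4, "a1": 0.1, "b1": 0.1, "alpha": 1.0}

rank = 8        # rank of the low-rank component, shared across all methods
m, n = 48, 42   # hypothetical data-matrix dimensions

# Random initialization strategy for the factor matrices U (m x r) and
# V (n x r); the low-rank component is reconstructed as U @ V.T (m x n).
rng = np.random.default_rng()
U = rng.standard_normal((m, rank))
V = rng.standard_normal((n, rank))
```

With this initialization, `U @ V.T` gives the rank-8 reconstruction that the variational inference procedure would refine.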