Geometric Matrix Completion with Recurrent Multi-Graph Neural Networks

Authors: Federico Monti, Michael Bronstein, Xavier Bresson

NeurIPS 2017

Reproducibility Variable | Result | LLM Response (evidence quoted from the paper)
Research Type | Experimental | "We apply our method on several standard datasets, showing that it outperforms state-of-the-art matrix completion techniques. [...] We present two main contributions. First, we introduce a new multi-graph CNN architecture that generalizes [10] to multiple graphs. [...] Second, we approach the matrix completion problem as learning on user and item graphs using the new deep multi-graph CNN framework. [...] We will demonstrate in Section 4 that the proposed RMGCNN and sRMGCNN architectures show themselves very well on different settings of matrix completion problems. [...] Experimental settings. We closely followed the experimental setup of [33], using five standard datasets: Synthetic dataset from [19], MovieLens [29], Flixster [18], Douban [27], and Yahoo Music [11]. We used disjoint training and test sets and the presented results are reported on test sets in all our experiments. [...] Tables 4 and 5 summarize the performance of different methods."
Researcher Affiliation | Academia | "Federico Monti, Università della Svizzera italiana, Lugano, Switzerland, federico.monti@usi.ch; Michael M. Bronstein, Università della Svizzera italiana, Lugano, Switzerland, michael.bronstein@usi.ch; Xavier Bresson, School of Computer Science and Engineering, NTU, Singapore, xbresson@ntu.edu.sg"
Pseudocode | Yes | Algorithm 1 (RMGCNN): input m×n matrix X^(0) containing initial values. 1: for t = 0:T do; 2: apply the Multi-Graph CNN (13) on X^(t), producing an m×n×q output X̃^(t); 3: for all elements (i, j) do; 4: apply the RNN to the q-dimensional vector x̃_ij^(t) = (x̃_ij1^(t), ..., x̃_ijq^(t)), producing the incremental update dx_ij^(t); 5: end for; 6: update X^(t+1) = X^(t) + dX^(t). [...] Algorithm 2 (sRMGCNN): input m×r factor H^(0) and n×r factor W^(0) representing the matrix X^(0). 1: for t = 0:T do; 2: apply the Graph CNN on H^(t), producing an n×q output H̃^(t); 3: for j = 1:n do; 4: apply the RNN to the q-dimensional vector h̃_j^(t) = (h̃_j1^(t), ..., h̃_jq^(t)), producing the incremental update dh_j^(t); 5: end for; 6: update H^(t+1) = H^(t) + dH^(t); 7: repeat steps 2-6 for W^(t+1). (A short NumPy sketch of the multi-graph convolution and of this recurrent diffusion loop is given after the table.)
Open Source Code | Yes | "Code: https://github.com/fmonti/mgcnn"
Open Datasets | Yes | "Experimental settings. We closely followed the experimental setup of [33], using five standard datasets: Synthetic dataset from [19], MovieLens [29], Flixster [18], Douban [27], and Yahoo Music [11]."
Dataset Splits | Yes | "We used disjoint training and test sets and the presented results are reported on test sets in all our experiments. [...] The number of diffusion steps T has been estimated on the MovieLens validation set and used in all our experiments." (A sketch of such a disjoint split over the observed entries is given after the table.)
Hardware Specification | No | "All the models were implemented in Google TensorFlow and trained using the Adam stochastic optimization algorithm [20] with learning rate 10^-3." The paper does not specify the hardware (GPU/CPU models, memory, etc.) used for the experiments.
Software Dependencies | No | "All the models were implemented in Google TensorFlow and trained using the Adam stochastic optimization algorithm [20] with learning rate 10^-3." No version numbers for TensorFlow or other software dependencies are provided.
Experiment Setup | Yes | "In all the experiments, we used the following settings for our RMGCNNs: Chebyshev polynomials of order p = 4, outputting k = 32-dimensional features, LSTM cells with 32 features and T = 10 diffusion steps (for both training and test). The number of diffusion steps T has been estimated on the MovieLens validation set and used in all our experiments. All the models were implemented in Google TensorFlow and trained using the Adam stochastic optimization algorithm [20] with learning rate 10^-3. In factorized models, ranks r = 15 and 10 were used for the synthetic and real datasets, respectively. For all methods, hyperparameters were chosen by cross-validation." (These settings are collected in the configuration sketch below.)
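For convenience, the hyperparameters quoted in the Experiment Setup row can be collected into one configuration block. This is a minimal sketch: the values come from the paper, but the CONFIG dictionary and its key names are illustrative conventions, not part of the authors' released code.

```python
# Hyperparameter values as reported in the paper; the CONFIG dict itself is an
# illustrative convention, not taken from the released implementation.
CONFIG = {
    "cheb_order_p": 4,        # order of the Chebyshev polynomial filters
    "num_features_q": 32,     # feature channels output by the (multi-)graph CNN (k = q = 32)
    "lstm_units": 32,         # LSTM cell size
    "diffusion_steps_T": 10,  # recurrent diffusion steps (training and test)
    "optimizer": "Adam",      # Adam stochastic optimization [20]
    "learning_rate": 1e-3,    # reported learning rate
    "rank_synthetic": 15,     # factorization rank for the synthetic dataset
    "rank_real": 10,          # factorization rank for the real datasets
}
```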
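The Pseudocode row describes Algorithm 1 (RMGCNN) at a high level; the following NumPy sketch makes its computation pattern concrete. It assumes symmetric, rescaled row- and column-graph Laplacians, and it stands in for the learned layers with plain functions: multi_graph_conv sketches one channel of the two-dimensional Chebyshev filtering referred to as (13), while rnn_update is a hypothetical placeholder for the entrywise LSTM producing the incremental updates. All function names here are illustrative; this is a sketch of the loop structure, not the authors' TensorFlow implementation.

```python
import numpy as np

def chebyshev_terms(L, X, p):
    """Chebyshev terms T_0(L) X, ..., T_p(L) X via the recurrence
    T_k(L) X = 2 L T_{k-1}(L) X - T_{k-2}(L) X.
    L is assumed to be a rescaled Laplacian (2 L / lambda_max - I)."""
    terms = [X, L @ X]
    for _ in range(2, p + 1):
        terms.append(2 * L @ terms[-1] - terms[-2])
    return terms[: p + 1]

def multi_graph_conv(X, L_row, L_col, theta):
    """One output channel of the multi-graph convolution:
    sum_{j,j'} theta[j, j'] T_j(L_row) X T_{j'}(L_col),
    assuming symmetric (normalized) graph Laplacians."""
    p = theta.shape[0] - 1
    row_terms = chebyshev_terms(L_row, X, p)  # filtering along the user (row) graph
    out = np.zeros_like(X)
    for j in range(p + 1):
        # filtering along the item (column) graph, applied on the right of X
        col_terms = chebyshev_terms(L_col, row_terms[j].T, p)
        for jp in range(p + 1):
            out += theta[j, jp] * col_terms[jp].T
    return out

def rmgcnn_diffusion(X0, L_row, L_col, thetas, rnn_update, T=10):
    """Algorithm 1 (RMGCNN) loop: T recurrent diffusion steps.
    `thetas` is a list of q coefficient matrices (one per feature channel);
    `rnn_update` is a placeholder for the learned LSTM that maps the q features
    of every entry (plus its hidden state) to an incremental update dX."""
    X, state = X0.copy(), None
    for _ in range(T):
        # q-channel features per matrix entry, shape (m, n, q)
        feats = np.stack([multi_graph_conv(X, L_row, L_col, th) for th in thetas], axis=-1)
        dX, state = rnn_update(feats, state)  # entrywise RNN update (step 4)
        X = X + dX                            # step 6 of Algorithm 1
    return X
```

With the reported settings, q = 32 feature channels and T = 10 diffusion steps are used; the LSTM's incremental updates are what progressively fill in the missing entries over the diffusion iterations.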
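The Dataset Splits row states only that the observed ratings were divided into disjoint training and test sets (with a MovieLens validation set used to choose T). A minimal sketch of such a disjoint split over the observed entries is given below; the 90/10 fraction and the function name are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def split_observed_entries(mask, train_fraction=0.9, seed=0):
    """Split the observed entries of a rating matrix into disjoint train/test masks.
    `mask` is a boolean m x n array marking the known ratings. The 90/10 split
    fraction is an illustrative assumption; the paper only states that the
    training and test sets are disjoint."""
    rng = np.random.default_rng(seed)
    observed = np.argwhere(mask)   # (num_observed, 2) array of (row, col) indices
    rng.shuffle(observed)
    n_train = int(train_fraction * len(observed))
    train_mask = np.zeros_like(mask, dtype=bool)
    test_mask = np.zeros_like(mask, dtype=bool)
    train_mask[tuple(observed[:n_train].T)] = True
    test_mask[tuple(observed[n_train:].T)] = True
    return train_mask, test_mask
```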