Kissing to Find a Match: Efficient Low-Rank Permutation Representation

Authors: Hannah Dröge, Zorah Lähner, Yuval Bahat, Onofre Martorell Nadal, Felix Heide, Michael Moeller

NeurIPS 2023

Reproducibility Variable Result LLM Response
Research Type: Experimental — "We demonstrate the applicability and merits of the proposed approach through a series of experiments on a range of problems that involve predicting permutation matrices, from linear and quadratic assignment to shape matching problems." "The following experiments validate our efficient permutation estimation method for different applications, and they confirm the ability to scale to very large problem sizes." (Section 4, Experiments)
Researcher Affiliation: Academia — Hannah Dröge (University of Siegen), Zorah Lähner (University of Siegen), Yuval Bahat (Princeton University), Onofre Martorell (University of Balearic Islands), Felix Heide (Princeton University), Michael Möller (University of Siegen)
Pseudocode: No — The paper describes its methods textually but includes no explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code: No — The paper neither states that its own source code is released nor links to a repository for the described method; it mentions a repository only for a baseline method.
Open Datasets: Yes — "We use the FAUST registrations [3] with the original 6890 vertices," a version downsampled to 502 vertices, and C generated by the ground-truth correspondence; "We show results on the QAPLIB [6] library of quadratic assignment problems of real-world applications"; the networks are trained "over 1600 epochs on 10000 shapes of the SURREAL dataset [45]"; and further experiments use "data from the TOSCA dataset [5]."
Dataset Splits: No — The paper mentions training on several datasets but does not explicitly specify train/validation/test splits (e.g., percentages, sample counts, or the procedure used to create them).
Hardware Specification: No — The paper does not specify the hardware (e.g., GPU/CPU models, memory, or other processing units) used to run the experiments.
Software Dependencies: No — The paper mentions the 'PyTorch Adam optimizer' and libraries such as 'sklearn.linear_sum_assignment' and the 'python POT package', but it does not give version numbers for these components, which reproducible ancillary-software details would require.
Experiment Setup: Yes — "We solve for the permutation by performing 20000 minimizing steps with a learning rate set to 0.01 over the negative log-likelihood loss. In these experiments, we again found that each point was paired with its corresponding nearest neighbor. Also, we could reduce the memory consumption, as shown in Fig. 2." In further runs with similar settings, the temperature parameter α was increased linearly during optimization from α = 5·10⁻⁵ to α = 1000. Other experiments used n = 100, m = 30, α = 20 and a greedy heuristic to generate valid …, with β being iteratively increased and with μ(P(V, W)) being the same permutation constraint regularizer as in (13). "We propose to replace the calculation of the permutation matrix based on the output of the first network Nθ by s(αV Wᵀ), with α = 40. The network is trained on the modified loss function." Similar to Marin et al., the networks are trained over 1600 epochs on 10000 shapes of the SURREAL dataset [45] and evaluated on 100 noisy and noise-free objects of different shapes and poses of the FAUST dataset [3], provided by [26] in [25].
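The setup excerpted above represents the soft permutation as a row-wise softmax of a low-rank product, s(αV Wᵀ), sharpened by the temperature α and eventually rounded to a hard assignment (the text mentions linear_sum_assignment). The following is a minimal NumPy/SciPy sketch of those two steps under that reading, not the authors' code: the sizes n and k, the random factors V and W, and the rounding via scipy.optimize.linear_sum_assignment are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def row_softmax(X):
    """Numerically stable row-wise softmax."""
    X = X - X.max(axis=1, keepdims=True)
    E = np.exp(X)
    return E / E.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n, k = 6, 3                      # illustrative sizes; the method targets k << n for large n
V = rng.normal(size=(n, k))      # stand-ins for the optimized low-rank factors
W = rng.normal(size=(n, k))

# Soft permutation from the low-rank factors: storing V and W costs O(nk)
# instead of the O(n^2) cost of a dense doubly-stochastic matrix.
# Raising the temperature alpha sharpens each row toward a one-hot vector.
sharpness = [row_softmax(a * (V @ W.T)).max(axis=1).mean()
             for a in (1.0, 40.0, 1000.0)]

# Round the soft matrix to a hard permutation via a linear assignment problem.
P = row_softmax(40.0 * (V @ W.T))
row_ind, col_ind = linear_sum_assignment(-P)   # maximize total matched mass
perm = np.zeros((n, n))
perm[row_ind, col_ind] = 1.0                   # exact permutation matrix
```

Linearly increasing α during optimization, as the excerpt describes (from 5·10⁻⁵ up to 1000), moves the soft matrix smoothly from near-uniform rows toward a near-permutation before the hard rounding step.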