Universe Points Representation Learning for Partial Multi-Graph Matching
Authors: Zhakshylyk Nurlanov, Frank R. Schmidt, Florian Bernard
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The proposed approach advances the state of the art in the semantic keypoint matching problem, evaluated on the Pascal VOC, CUB, and Willow datasets. Moreover, a set of controlled experiments on a synthetic graph matching dataset demonstrates the scalability of the method to graphs with a large number of nodes and its robustness to high partiality. |
| Researcher Affiliation | Collaboration | Zhakshylyk Nurlanov (1,2), Frank R. Schmidt (1), Florian Bernard (2); (1) Bosch Center for Artificial Intelligence, (2) University of Bonn |
| Pseudocode | No | The paper does not contain any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | We evaluate the real-world performance on Pascal VOC and CUB-200-2011 datasets. We also estimate the transferability of the learned node feature encoder to other datasets (Pascal Willow). |
| Dataset Splits | No | The paper mentions using 'training set' and 'test set' but does not provide specific details on the dataset splits (e.g., percentages, sample counts, or explicit citations for the splits themselves) needed for reproduction. |
| Hardware Specification | No | The paper mentions 'Training of BBGM-Multi-3 on problems with more than 300 universe points does not fit into GPU memory (48 GB)'. While a memory capacity is given, no specific GPU model or type is mentioned. |
| Software Dependencies | No | The paper mentions software components such as 'SplineCNN' and references 'ImageNet pre-trained VGG features' but does not specify version numbers for these or any other software dependencies. |
| Experiment Setup | No | The paper describes the architecture and components used (e.g., SplineCNN blocks, MLPs) but does not provide specific experimental setup details such as hyperparameter values (learning rate, batch size, epochs, optimizer settings) or training configurations. |