Recurrent Transformer Networks for Semantic Correspondence
Authors: Seungryong Kim, Stephen Lin, Sangryul Jeon, Dongbo Min, Kwanghoon Sohn
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper reports experiments in dedicated sections, e.g., "5. Experimental Results and Discussion", "5.2 Ablation Study", and "5.3 Matching Results". |
| Researcher Affiliation | Collaboration | Yonsei University, Seoul, South Korea; Microsoft Research, Beijing, China; Ewha Womans University, Seoul, South Korea |
| Pseudocode | No | The paper does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code or explicitly state its release. |
| Open Datasets | Yes | Performance was measured on the TSS dataset [36], PF-WILLOW dataset [7], and PF-PASCAL dataset [8]. |
| Dataset Splits | Yes | "Following the split in [9, 31], we used 700 training pairs, 300 validation pairs, and 300 testing pairs." (A split sketch follows the table.) |
| Hardware Specification | No | The paper does not specify any hardware details like GPU/CPU models or specific computer specifications used for running experiments. |
| Software Dependencies | No | The paper mentions several neural network architectures and general tools such as the SIFT Flow optimizer, but does not specify any software dependencies with version numbers (e.g., library or programming language versions). |
| Experiment Setup | No | The paper mentions that "RTNs w/ResNet [12] converge in 3-5 iterations" and discusses the window size N_i, but it does not provide concrete hyperparameter values such as learning rate, batch size, or optimizer settings, nor detailed training configurations. |
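
The paper states only the split sizes for PF-PASCAL (700 training, 300 validation, and 300 testing pairs, following [9, 31]); it does not say how the pair list is stored or partitioned. The sketch below illustrates such a split under stated assumptions: the file name `pf_pascal_pairs.csv`, its column names, and the random shuffle are hypothetical choices for illustration, since [9, 31] define a fixed published split rather than a random one.

```python
# Minimal sketch of a 700/300/300 pair split for PF-PASCAL.
# Assumptions (not from the paper): pairs are listed in pf_pascal_pairs.csv
# with "source" and "target" columns, and the split is drawn by shuffling;
# the original works [9, 31] instead provide a fixed split.
import csv
import random


def load_pairs(csv_path="pf_pascal_pairs.csv"):
    """Read (source_image, target_image) pairs from a CSV file."""
    with open(csv_path, newline="") as f:
        return [(row["source"], row["target"]) for row in csv.DictReader(f)]


def split_pairs(pairs, n_train=700, n_val=300, n_test=300, seed=0):
    """Shuffle the pair list and cut it into train/val/test subsets."""
    assert len(pairs) >= n_train + n_val + n_test, "not enough pairs for the requested split"
    rng = random.Random(seed)          # fixed seed so the split is reproducible
    shuffled = pairs[:]
    rng.shuffle(shuffled)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:n_train + n_val + n_test]
    return train, val, test
```

To reproduce the paper's evaluation exactly, the fixed pair lists from [9, 31] should be used in place of the random shuffle shown here.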