Learning Invariant Representations Of Planar Curves
Authors: Gautam Pai, Aaron Wetzler, Ron Kimmel
ICLR 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In comparison with axiomatic constructions, we show that the invariants approximated by the learning architectures have better numerical qualities such as robustness to noise, resiliency to sampling, as well as the ability to adapt to occlusion and partiality. In Section 5 we conduct experiments to test the numerical stability and robustness of the invariant signatures. |
| Researcher Affiliation | Academia | Gautam Pai, Aaron Wetzler & Ron Kimmel Department of Computer Science Technion-Israel Institute of Technology {paigautam,twerd,ron}@cs.technion.ac.il |
| Pseudocode | No | The paper describes the network architecture and training process in text and diagrams (Figure 3) but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link for the availability of its source code. It only mentions using the 'Torch library' for implementation. |
| Open Datasets | Yes | The contours are extracted from the shapes of the MPEG7 Database (Latecki et al. (2000)) as shown in the first part of Figure 4. |
| Dataset Splits | Yes | 700 of the total were used for training and 350 each for testing and validation. (A split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments, only mentioning that training was performed using the Torch library. |
| Software Dependencies | No | The paper mentions using the 'Torch library' and 'Adagrad' for training but does not provide specific version numbers for these software dependencies, which are necessary for full reproducibility. |
| Experiment Setup | Yes | We trained using Adagrad (Duchi et al., 2011) at a learning rate of 5 × 10⁻⁴ and a batch size of 10. We set the contrastive loss hyperparameter margin µ = 1. (A sketch of this setup follows the table.) |