Data-Driven Approximations to NP-Hard Problems
Authors: Anton Milan, S. Hamid Rezatofighi, Ravi Garg, Anthony Dick, Ian Reid
AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the functionality of our approach on three exemplar applications: marginal distributions of a joint matching space, feature point matching and the travelling salesman problem. We show encouraging results on synthetic and real data in all three cases. |
| Researcher Affiliation | Academia | Anton Milan, S. Hamid Rezatofighi, Ravi Garg, Anthony Dick, Ian Reid School of Computer Science, The University of Adelaide, Australia {firstname.lastname}@adelaide.edu.au |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. Figure 3 is a diagram of the LSTM model. |
| Open Source Code | No | The paper mentions using 'a publicly available implementation of Pointer Networks' with a GitHub link (https://github.com/vshallc/PtrNets) for the TSP experiments. However, this is code for a third-party baseline they use, not a release of their own novel methodology (e.g., the objective-based learning scheme or the LSTM for bipartite matching). |
| Open Datasets | Yes | For this experiment, we employ the public car and motorbikes dataset of Leordeanu et al. (2011), consisting of 30 pairs of car images and 20 pairs of motorbikes images with manually annotated point correspondences. We use the map of Qatar (Lelonek 2013) with 194 different locations as our playing field for learning to solve the TSP with 20 nodes. |
| Dataset Splits | Yes | To generate the training and validation data, we randomly sample the points from car images for each training instance and obtain 100 approximate solutions using the Integer Projected Fixed Point solver (IPFP) (Leordeanu, Sukthankar, and Hebert 2011). ... We use 50 000 training examples and a batch size of 10. The learning rate is set to 0.001 and is decreased by 10% every 1000 iterations. ... We randomly sample only 10K subsets from the large pool... We train the Pointer-Net on 10K training examples with the standard log-likelihood loss, a batch size of 1, and learning rate of 1 using stochastic gradient descent. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU models, or memory amounts used for running the experiments. It does not mention cloud or cluster resources with specific specifications. |
| Software Dependencies | No | The paper mentions 'Gurobi's QBP solver' and 'a publicly available implementation of Pointer Networks' but does not specify their version numbers. It also implicitly relies on Theano (commonly used with Pointer Networks at the time), again without a specific version. It lacks concrete version numbers for the software dependencies needed for replication. |
| Experiment Setup | Yes | The network's size, i.e. the dimension of the hidden state vector h, is 128 and there is one layer with a dropout probability of 0.1. We use 50 000 training examples and a batch size of 10. The learning rate is set to 0.001 and is decreased by 10% every 1000 iterations. ... for this task, we use a 2-layered RNN of size 64. ... We train the Pointer-Net on 10K training examples with the standard log-likelihood loss, a batch size of 1, and learning rate of 1 using stochastic gradient descent. (A hedged sketch of this configuration appears below the table.) |
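
For concreteness, here is a minimal PyTorch sketch of the bipartite-matching training configuration quoted in the Experiment Setup row. This is not the authors' code (none was released, per the Open Source Code row): the model body, optimizer type, loss function, and input dimension are all assumptions. Only the quoted hyperparameters come from the paper, i.e. hidden size 128, one layer, dropout 0.1, batch size 10, and a learning rate of 0.001 decreased by 10% every 1000 iterations.

```python
import torch
import torch.nn as nn

class MatchingLSTM(nn.Module):
    """Hypothetical stand-in for the paper's matching LSTM (Figure 3);
    only the hyperparameters below are taken from the quoted setup."""

    def __init__(self, input_dim: int, hidden_dim: int = 128,
                 num_layers: int = 1, dropout: float = 0.1):
        super().__init__()
        # One LSTM layer with a 128-dimensional hidden state h.
        self.rnn = nn.LSTM(input_dim, hidden_dim, num_layers,
                           batch_first=True)
        # Dropout probability of 0.1; applied to the LSTM outputs here,
        # since the paper does not say where exactly it is inserted.
        self.drop = nn.Dropout(dropout)
        self.out = nn.Linear(hidden_dim, input_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(x)
        return self.out(self.drop(h))

model = MatchingLSTM(input_dim=10)  # input_dim is an assumption
# Optimizer and loss are assumptions; the paper does not quote them.
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)
loss_fn = nn.MSELoss()
# "Decreased by 10% every 1000 iterations": StepLR with gamma=0.9,
# stepped once per training iteration.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                            step_size=1000, gamma=0.9)

def train_step(x: torch.Tensor, y: torch.Tensor) -> float:
    """One update on a minibatch (batch size 10 per the paper);
    constructing the loader from the IPFP solutions is omitted."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()  # lr *= 0.9 every 1000 calls
    return loss.item()
```

The same pattern, with a 2-layer RNN of size 64 and plain SGD at learning rate 1 with batch size 1, would cover the Pointer-Net setup quoted for the TSP experiment.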