Divide and Conquer Networks

Authors: Alex Nowak, David Folqué, Joan Bruna

ICLR 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the flexibility and efficiency of the Divide-and-Conquer Network on several combinatorial and geometric tasks: convex hull, clustering, knapsack and Euclidean TSP.
Researcher Affiliation | Academia | Alex Nowak, Courant Institute of Mathematical Sciences and Center for Data Science, New York University, New York, NY 10012, USA
Pseudocode | No | The paper describes algorithms and procedures in text and mathematical equations but does not include any clearly labeled "Pseudocode" or "Algorithm" blocks.
Open Source Code | Yes | Publicly available code to reproduce all results is at https://github.com/alexnowakvila/DiCoNet
Open Datasets | Yes | We test the model on two different datasets. The first one, which we call "Gaussian", is constructed by sampling k points in the unit square of dimension d, then sampling n/k points from Gaussians of variance 10^-3 centered at each of the k points. The second one is constructed by picking 3x3x3 random patches of the RGB images from the CIFAR-10 dataset.
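
For concreteness, here is a minimal sketch of the "Gaussian" dataset construction quoted above, assuming NumPy. The function name and its signature are hypothetical; only the sampling scheme (k uniform centers in the unit cube, n/k points per center with variance 10^-3) comes from the paper's description.

```python
import numpy as np

def make_gaussian_dataset(n, k, d=2, var=1e-3, rng=None):
    """Hypothetical helper mirroring the paper's 'Gaussian' dataset recipe."""
    rng = np.random.default_rng() if rng is None else rng
    # Sample k cluster centers uniformly in the d-dimensional unit cube.
    centers = rng.uniform(0.0, 1.0, size=(k, d))
    # Draw n/k points from an isotropic Gaussian (variance 1e-3) per center.
    m = n // k
    points = centers[:, None, :] + rng.normal(scale=np.sqrt(var), size=(k, m, d))
    labels = np.repeat(np.arange(k), m)
    return points.reshape(-1, d), labels

X, y = make_gaussian_dataset(n=1000, k=10)  # 1000 points in 10 tight clusters
```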
Dataset Splits | No | The paper mentions a "training dataset" and a "test dataset" for its experiments but does not explicitly describe a separate validation set or its split percentages/counts.
Hardware Specification | No | The paper does not specify any hardware used for the experiments, such as GPU models, CPU models, or specific cloud computing instances.
Software Dependencies | No | The paper mentions the RMSProp and Adam optimizers and a GRU for the merge block, but does not provide version numbers for any software libraries, frameworks, or programming languages used.
Experiment Setup | Yes | The split parameters are updated with the RMSProp algorithm with an initial learning rate of 0.01, and the merge parameters with Adam with an initial learning rate of 0.001. Learning rates are updated as lr/k, where k is the epoch number. We use a batch size of 128. The split block has 5 layers with 15 hidden units. The merge block is a GRU with 512 hidden units.
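
The quoted setup maps directly onto a few lines of optimizer configuration. Below is a hedged PyTorch sketch: `split_block` and `merge_block` are placeholder modules whose input sizes are assumptions, while the optimizer choices, initial learning rates, and the lr/k decay are taken from the quote.

```python
import torch

# Placeholders: the 5-layer / 15-hidden-unit split block and the 512-unit
# GRU merge block come from the quote; input dimensions are assumptions.
split_block = torch.nn.Sequential(*(torch.nn.Linear(15, 15) for _ in range(5)))
merge_block = torch.nn.GRU(input_size=15, hidden_size=512)

# Split parameters: RMSProp at 0.01; merge parameters: Adam at 0.001.
opt_split = torch.optim.RMSprop(split_block.parameters(), lr=0.01)
opt_merge = torch.optim.Adam(merge_block.parameters(), lr=0.001)

# "lr/k where k is the epoch number": LambdaLR multiplies the initial lr
# by the returned factor, and its epoch argument is 0-based, hence the +1.
decay = lambda epoch: 1.0 / (epoch + 1)
sched_split = torch.optim.lr_scheduler.LambdaLR(opt_split, decay)
sched_merge = torch.optim.lr_scheduler.LambdaLR(opt_merge, decay)
```

Calling `sched_split.step()` and `sched_merge.step()` at the end of each epoch applies the decay; the batch size of 128 would be set in the data loader.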