Techniques for Symbol Grounding with SATNet

Authors: Sever Topan, David Rolnick, Xujie Si

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'We demonstrate that our method allows SATNet to attain full accuracy even with a harder problem setup that prevents any label leakage. We additionally introduce a proofreading method that further improves the performance of SATNet architectures, beating the state-of-the-art on Visual Sudoku. All experiments were carried out on a Nvidia GTX1070 across 100 epochs, with each epoch taking roughly 2 minutes. Table 1: Performance of our method compared to the original SATNet architecture between grounded and ungrounded versions of the Visual Sudoku problem.'
Researcher Affiliation | Collaboration | Sever Topan (1, 2), David Rolnick (1, 3, 4), and Xujie Si (1, 3, 4); 1 McGill University, 2 NVIDIA, 3 Mila Quebec AI Institute, 4 CIFAR AI Research Chair. {stopan, drolnick, xsi}@cs.mcgill.ca
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Included as Supplemental Material.
Open Datasets | Yes | 'We used the Sudoku Dataset made available under an MIT License from the original SATNet work [7].' (See the loading sketch below the table.)
Dataset Splits | No | The paper mentions early stopping based on per-cell error, implying a validation step, but does not provide specific details on the dataset split used for validation: 'One thing to note is that the self-grounded training step is susceptible to overfitting, and one needs to employ early stopping on the basis of per-cell error in order to learn the permutation matrix P̂.' (See the early-stopping sketch below the table.)
Hardware Specification | Yes | 'All experiments were carried out on a Nvidia GTX1070 across 100 epochs, with each epoch taking roughly 2 minutes.'
Software Dependencies | No | The paper states that 'The Adam optimiser was used' but does not specify version numbers for key software libraries such as PyTorch, TensorFlow, or Python.
Experiment Setup | Yes | 'All experiments were carried out on a Nvidia GTX1070 across 100 epochs, with each epoch taking roughly 2 minutes. The Adam optimiser was used with learning rate of 2 × 10^-3 for the SATNet layer, and 10^-5 for the digit classifier [36].' (See the optimiser configuration sketch below the table.)
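
For the Open Datasets row, the Sudoku data from the original SATNet release is distributed as serialized PyTorch tensors. The sketch below shows one way it is commonly loaded; the directory layout, file names, and tensor contents are assumptions about that release, not details stated in this paper.

```python
import torch

# File names below are assumptions about the original SATNet Sudoku archive;
# adjust them to whatever the downloaded release actually contains.
features = torch.load("sudoku/features.pt")          # symbolic (one-hot) puzzle inputs
features_img = torch.load("sudoku/features_img.pt")  # MNIST-rendered puzzle images
labels = torch.load("sudoku/labels.pt")              # one-hot puzzle solutions

print(features.shape, features_img.shape, labels.shape)
```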
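
For the Dataset Splits row, the quoted criterion can be read as monitoring per-cell error on a held-out split and stopping when it no longer improves. The sketch below is a minimal interpretation; the model interface, the existence of a validation loader, and the patience value are assumptions, since the paper does not describe them.

```python
import torch

def per_cell_error(model, loader, device="cpu"):
    """Fraction of individual Sudoku cells predicted incorrectly on a held-out split.

    Assumes `model` maps a batch of puzzle inputs to per-cell digit logits of shape
    (batch, 81, 9); this interface is an assumption, not the paper's API.
    """
    model.eval()
    wrong, total = 0, 0
    with torch.no_grad():
        for inputs, targets in loader:
            inputs, targets = inputs.to(device), targets.to(device)
            preds = model(inputs).argmax(dim=-1)   # predicted digit per cell, (batch, 81)
            wrong += (preds != targets).sum().item()
            total += targets.numel()
    return wrong / total

def train_with_early_stopping(model, train_one_epoch, val_loader, epochs=100, patience=5):
    """Run training epochs and stop when per-cell validation error stops improving."""
    best_err, best_state, stale = float("inf"), None, 0
    for epoch in range(epochs):
        train_one_epoch(model)                     # one epoch of training (assumed callable)
        err = per_cell_error(model, val_loader)
        if err < best_err:
            best_err, best_state, stale = err, model.state_dict(), 0
        else:
            stale += 1
            if stale >= patience:
                break                              # per-cell error no longer improving
    if best_state is not None:
        model.load_state_dict(best_state)          # restore the best checkpoint
    return best_err
```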
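
For the Experiment Setup row, the quoted learning rates map naturally onto PyTorch parameter groups. The modules below are placeholders standing in for the paper's digit classifier and SATNet layer; only the optimiser choice and the two learning rates come from the paper.

```python
import torch

# Hypothetical modules standing in for the architecture described in the paper:
# a convolutional digit classifier feeding a SATNet layer.
digit_classifier = torch.nn.Sequential(
    torch.nn.Conv2d(1, 32, kernel_size=3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Flatten(),
    torch.nn.Linear(32 * 28 * 28, 9),
)
satnet_layer = torch.nn.Linear(81 * 9, 81 * 9)  # placeholder for the actual SATNet layer

# Adam with the learning rates quoted above: 2e-3 for the SATNet layer,
# 1e-5 for the digit classifier.
optimizer = torch.optim.Adam([
    {"params": satnet_layer.parameters(), "lr": 2e-3},
    {"params": digit_classifier.parameters(), "lr": 1e-5},
])
```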