Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions

Authors: Nikolaos Karalias, Joshua Robinson, Andreas Loukas, Stefanie Jegelka

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically, we observe benefits of our extensions for unsupervised neural combinatorial optimization, in particular with high-dimensional representations." and "We experiment with SFEs as loss functions in neural network pipelines on discrete objectives arising in combinatorial and vision tasks."
Researcher Affiliation | Collaboration | Nikolaos Karalias (EPFL, nikolaos.karalias@epfl.ch); Joshua Robinson (MIT CSAIL, joshrob@mit.edu); Andreas Loukas (Prescient Design, Genentech, Roche, andreas.loukas@roche.com); Stefanie Jegelka (MIT CSAIL, stefje@csail.mit.edu)
Pseudocode | No | The paper describes its methods through mathematical formulations and prose, but it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper says "see Appendix E for details on data, hardware, and software," suggesting code information might appear there, but Appendix E is not included in the available text, and there is no explicit statement of, or link to, an open-source code repository.
Open Datasets | Yes | "We use the ENZYMES, PROTEINS, IMDB, MUTAG, and COLLAB datasets from the TUDatasets benchmark (Morris et al., 2020)"
Dataset Splits | Yes | "We use the ENZYMES, PROTEINS, IMDB, MUTAG, and COLLAB datasets from the TUDatasets benchmark (Morris et al., 2020), using a 60/30/10 split for train/test/val." (A minimal illustration of such a split follows the table.)
Hardware Specification | No | The paper states "see Appendix E for details on data, hardware, and software," but Appendix E is not included in the available text, so specific hardware details are not available.
Software Dependencies | No | The paper states "see Appendix E for details on data, hardware, and software," but Appendix E is not included in the available text, so specific software dependencies with version numbers are not detailed in the main body.
Experiment Setup | No | The paper states "Finally, see Appendix F for training and hyper-parameter optimization details," but Appendix F is not included in the available text, so specific experimental setup details such as hyperparameters are not available in the main body.
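
To make the Dataset Splits entry concrete, here is a minimal sketch of loading the TUDatasets benchmark datasets and dividing each into a 60/30/10 train/test/val split. It is not the authors' code: the paper's software stack and split procedure are not specified in the available text, so PyTorch Geometric's TUDataset loader, a seeded random permutation split, and the "IMDB-BINARY" dataset identifier are assumptions made for this illustration.

# Illustrative sketch only. Assumptions: PyTorch Geometric is used, the split is a
# seeded random permutation, and "IMDB" refers to the TUDatasets name "IMDB-BINARY".
import torch
from torch_geometric.datasets import TUDataset

def load_and_split(name: str, root: str = "data", seed: int = 0):
    """Load a TUDatasets benchmark dataset and split it 60/30/10 into train/test/val."""
    dataset = TUDataset(root=root, name=name)
    n = len(dataset)
    perm = torch.randperm(n, generator=torch.Generator().manual_seed(seed))

    n_train = int(0.6 * n)
    n_test = int(0.3 * n)
    train_set = dataset[perm[:n_train]]
    test_set = dataset[perm[n_train:n_train + n_test]]
    val_set = dataset[perm[n_train + n_test:]]  # remaining ~10%
    return train_set, test_set, val_set

# Datasets named in the paper (IMDB-BINARY assumed for "IMDB").
for name in ["ENZYMES", "PROTEINS", "IMDB-BINARY", "MUTAG", "COLLAB"]:
    train_set, test_set, val_set = load_and_split(name)
    print(name, len(train_set), len(test_set), len(val_set))

Fixing the permutation seed keeps the split reproducible across runs; the exact seed and shuffling strategy used by the authors are not reported in the available text.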