Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

3D Steerable CNNs: Learning Rotationally Equivariant Features in Volumetric Data

Authors: Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, Taco S. Cohen

NeurIPS 2018

Reproducibility variables, each with its classified result and the supporting LLM response quoted as evidence:

Research Type: Experimental
LLM response: "Our experimental results confirm the effectiveness of 3D Steerable CNNs for the problem of amino acid propensity prediction and protein structure classification, both of which have inherent SE(3) symmetry."

Researcher Affiliation: Collaboration
LLM response: "Maurice Weiler* (University of Amsterdam, EMAIL); Mario Geiger* (EPFL, EMAIL); Max Welling (University of Amsterdam, CIFAR, Qualcomm AI Research, EMAIL); Wouter Boomsma (University of Copenhagen, EMAIL); Taco Cohen (Qualcomm AI Research, EMAIL)"

Pseudocode: No
LLM response: "The paper does not contain any structured pseudocode or algorithm blocks."

Open Source Code: Yes
LLM response: "Source code is available at https://github.com/mariogeiger/se3cnn."

Open Datasets: Yes
LLM response: "We constructed a new data set, based on the CATH protein structure classification database [11], version 4.2 (see http://cathdb.info/browse/tree). ... The new dataset is available at https://github.com/wouterboomsma/cath_datasets."

Dataset Splits: Yes
LLM response: "We used the first seven of the ten splits for training, the eighth for validation and the last two for testing."

Hardware Specification: No
LLM response: "The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. It mentions 'current hardware' only in general terms."

Software Dependencies: No
LLM response: "The paper mentions using the 'Adam optimizer [25]' but does not provide version numbers for any software components, programming languages, or libraries used in the implementation."

Experiment Setup: Yes
LLM response: "We train the models for 100 epochs using the Adam optimizer [25], with an exponential learning rate decay of 0.94 per epoch starting after an initial burn-in phase of 40 epochs."
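The learning-rate schedule quoted under Experiment Setup (constant during a 40-epoch burn-in, then exponential decay of 0.94 per epoch over 100 epochs) can be sketched as a small helper. This is an illustrative reconstruction, not the authors' code: the base learning rate `base_lr=1e-3` is an assumed value, since the paper excerpt does not state the initial rate, and whether decay applies at epoch 40 itself or only afterwards is an interpretation.

```python
def learning_rate(epoch, base_lr=1e-3, decay=0.94, burn_in=40):
    """Learning rate at a given epoch: constant base_lr during the
    burn-in phase, then multiplied by `decay` once per epoch.
    NOTE: base_lr is a hypothetical value; the paper does not report it."""
    if epoch < burn_in:
        return base_lr
    return base_lr * decay ** (epoch - burn_in)

# Full 100-epoch schedule as described in the quoted setup.
schedule = [learning_rate(e) for e in range(100)]
```

In a PyTorch training loop, the same behavior could be obtained by stepping `torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.94)` only after epoch 40.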