NCDL: A Framework for Deep Learning on non-Cartesian Lattices

Authors: Joshua Horacsek, Usman Alim

NeurIPS 2023

Reproducibility assessment: each variable below is listed with its result and the supporting LLM response.
Research Type: Experimental
  "In this section, we delineate the experiments we conducted to evaluate NCDL. First, we compare NCDL with the closest existing hexagonal convolution library [34]. Subsequently, we explore the use of non-dyadic down/up sampling within bottlenecked architectures. All experiments are conducted on an AMD Ryzen 9 3900X (3.8GHz) with 128GB of DDR4 RAM operating at 3200 MHz, accompanied by an NVIDIA RTX 3090 with 24GB of RAM."
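
For context, the non-dyadic resampling the paper evaluates can be illustrated on the quincunx lattice, whose two cosets are ordinary Cartesian grids. The sketch below is illustrative only (it is not NCDL's API): it splits a tensor into even and odd cosets, halving the sample count rather than quartering it the way dyadic 2x2 downsampling does.

```python
import torch

def quincunx_cosets(x: torch.Tensor):
    """Split an (N, C, H, W) tensor into the two Cartesian cosets of the
    quincunx lattice: samples where i + j is even vs. odd. Assumes H, W even."""
    n, c, h, w = x.shape
    ij = torch.arange(h).view(-1, 1) + torch.arange(w).view(1, -1)
    # Each coset keeps half the samples (a |det D| = 2 decimation),
    # gentler than the 4x reduction of dyadic 2x2 downsampling.
    return (x[..., ij % 2 == 0].view(n, c, h, w // 2),
            x[..., ij % 2 == 1].view(n, c, h, w // 2))

x = torch.randn(8, 3, 32, 32)
c0, c1 = quincunx_cosets(x)
print(c0.shape, c1.shape)  # torch.Size([8, 3, 32, 16]) twice
```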
Researcher Affiliation: Academia
  Joshua J. Horacsek, Department of Computer Science, University of Calgary, Calgary, Alberta, j.horacsek@ncdl.ai; Usman R. Alim, Department of Computer Science, University of Calgary, Calgary, Alberta, ualim@ucalgary.ca
Pseudocode: No
  The paper contains mathematical definitions and propositions, but no structured pseudocode or algorithm blocks.
Open Source Code: Yes
  "Ultimately, we present a software library called Non-Cartesian Deep Learning (NCDL) which is an open source, concrete implementation of the lattice tensor container and the associated spatio-temporal operations defined over lattice tensors. ... Our implementation is available at https://www.ncdl.ai."
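
To make the quoted description concrete, here is a toy sketch of the lattice-tensor idea: a non-Cartesian lattice stored as a list of ordinary Cartesian tensors, one per coset, with integer coset offsets. The class name, fields, and methods are hypothetical; NCDL's actual container at https://www.ncdl.ai differs.

```python
import torch

class LatticeTensorSketch:
    """Toy model of a lattice tensor: Cartesian coset tensors plus the
    integer offset of each coset (hypothetical, not NCDL's real class)."""

    def __init__(self, cosets, offsets):
        self.cosets = list(cosets)    # e.g. two (N, C, H, W/2) tensors for quincunx
        self.offsets = list(offsets)  # e.g. [(0, 0), (1, 1)]

    def map(self, fn):
        # Pointwise operations act coset-by-coset, so standard PyTorch
        # layers and activations can be reused on each Cartesian piece.
        return LatticeTensorSketch([fn(c) for c in self.cosets], self.offsets)

lt = LatticeTensorSketch([torch.randn(8, 3, 32, 16)] * 2, [(0, 0), (1, 1)])
relu_lt = lt.map(torch.relu)  # pointwise nonlinearity over both cosets
```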
Open Datasets: Yes
  "We train our models using the CelebA dataset [30]. ... We train our models on the DUTS salient object detection dataset [40]."
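
CelebA ships with torchvision, so loading it is straightforward; the transform pipeline below is an assumption, since the paper does not state its preprocessing. DUTS has no torchvision loader and must be downloaded separately.

```python
import torchvision.transforms as T
from torchvision.datasets import CelebA

# Hypothetical preprocessing; the paper does not specify its pipeline.
tfm = T.Compose([T.CenterCrop(178), T.Resize(128), T.ToTensor()])
train_set = CelebA(root="data", split="train", transform=tfm, download=True)
valid_set = CelebA(root="data", split="valid", transform=tfm, download=True)
```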
Dataset Splits: No
  "Employing a straightforward L1 loss, we measure validation L1, L2, PSNR, SSIM [41] and LPIPS metrics [46]. ... we measure validation BCE, L1, L2 and SSIM." The paper refers to validation data but never specifies the splits (percentages or sample counts) used for training, validation, or testing.
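
For comparison, a fully specified split would look like the snippet below; the 80/10/10 proportions and the seed are assumptions, since the paper reports neither.

```python
import torch
from torch.utils.data import TensorDataset, random_split

full = TensorDataset(torch.randn(1000, 3, 32, 32))  # stand-in dataset
g = torch.Generator().manual_seed(42)               # assumed seed
train, val, test = random_split(full, [800, 100, 100], generator=g)
print(len(train), len(val), len(test))  # 800 100 100
```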
Hardware Specification: Yes
  "All experiments are conducted on an AMD Ryzen 9 3900X (3.8GHz) with 128GB of DDR4 RAM operating at 3200 MHz, accompanied by an NVIDIA RTX 3090 with 24GB of RAM."
Software Dependencies: No
  "NCDL library is implemented on top of PyTorch [31]." The paper names PyTorch but gives no version numbers for the dependencies needed for replication.
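
A replication would start by logging the environment; a minimal fingerprint of the kind the paper omits follows (the printed values are whatever your machine reports, not the authors'):

```python
import platform
import torch

print("python:", platform.python_version())
print("torch :", torch.__version__)   # the version the paper leaves unstated
print("cuda  :", torch.version.cuda)
if torch.cuda.is_available():
    p = torch.cuda.get_device_properties(0)
    print("gpu   :", p.name, f"{p.total_memory // 2**30} GiB")  # an RTX 3090 reports 24 GiB
```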
Experiment Setup: Yes
  "The network is trained with the Adam optimizer, default parameters, and a batch size of 8. We train for 300,000 iterations, as convergence was observed at this point, and take an average of 5 runs. ... The network is trained with the Adam optimizer, with default parameters, and a batch size of 8. We train for 120,000 iterations, as convergence was observed in the validation data at this point."
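
The stated recipe maps directly onto a standard PyTorch loop. The sketch below uses the quoted settings (Adam with defaults, batch size 8, a fixed iteration budget) but substitutes a placeholder model and random data, since the paper's architectures are not reproduced here.

```python
import torch
from torch.nn.functional import l1_loss
from torch.utils.data import DataLoader, TensorDataset

model = torch.nn.Conv2d(3, 3, 3, padding=1)           # placeholder network
opt = torch.optim.Adam(model.parameters())            # defaults: lr=1e-3, betas=(0.9, 0.999)
data = TensorDataset(torch.randn(64, 3, 32, 32), torch.randn(64, 3, 32, 32))
loader = DataLoader(data, batch_size=8, shuffle=True)  # batch size 8, per the paper

step, budget = 0, 300_000  # 300k iterations (super-resolution) or 120k (segmentation)
while step < budget:
    for x, y in loader:
        opt.zero_grad()
        l1_loss(model(x), y).backward()  # the paper's "straightforward L1 loss"
        opt.step()
        step += 1
        if step == budget:
            break
```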