CoFrNets: Interpretable Neural Architecture Inspired by Continued Fractions

Authors: Isha Puri, Amit Dhurandhar, Tejaswini Pedapati, Karthikeyan Shanmugam, Dennis Wei, Kush R. Varshney

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | On various regression and time-series forecasting tasks, we show that CoFrNets provide competitive performance when compared to other interpretable models and deep neural networks.
Researcher Affiliation | Collaboration | The first author is affiliated with Harvard University; the remaining authors are with IBM Research.
Pseudocode | No | The paper describes the architecture and its components using mathematical formulations and textual descriptions but does not include structured pseudocode or algorithm blocks (an illustrative sketch of the ladder structure is given below the table).
Open Source Code | No | The code and data for CoFrNets will be made publicly available upon acceptance.
Open Datasets | Yes | We use a total of eight benchmark datasets for regression and time-series forecasting, including six UCI regression datasets (Concrete, Energy, Power Plant, Protein, Yacht, Wine) and two time-series datasets (Pollution, Electricity).
Dataset Splits | Yes | For all datasets, we train the models using 80% of the data for training, 10% for validation, and 10% for testing.
Hardware Specification | No | The paper mentions software such as PyTorch and the Adam optimizer but gives no hardware details (e.g., GPU/CPU models) for its experiments.
Software Dependencies | No | The paper states 'CoFrNets are implemented in PyTorch using the Adam optimizer' but does not provide version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | For all datasets, we train the models using 80% of the data for training, 10% for validation, and 10% for testing. The models are trained for 200 epochs with a batch size of 64. The learning rate is initialized to 0.001 and decayed by a factor of 0.1 every 50 epochs. We use a weight decay of 0.0001 and a dropout rate of 0.1.
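
The pseudocode row above notes that the paper presents CoFrNets only through mathematical formulations. As a rough aid to reproduction, the following is a minimal PyTorch sketch of one reading of that description: each "ladder" evaluates a continued fraction whose terms are linear functions of the input, and the network combines several ladders. The layer widths, the epsilon safeguard against near-zero denominators, and the linear aggregation head are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class Ladder(nn.Module):
    """One continued-fraction ladder: w_0(x) + 1 / (w_1(x) + 1 / (w_2(x) + ...)),
    where each w_k(x) is a linear function of the input (assumed reading)."""

    def __init__(self, in_features: int, depth: int, eps: float = 1e-4):
        super().__init__()
        self.terms = nn.ModuleList(
            nn.Linear(in_features, 1, bias=False) for _ in range(depth)
        )
        self.eps = eps  # keeps denominators away from zero (assumption, not from the paper)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Evaluate the continued fraction from the innermost term outward.
        terms = list(self.terms)
        out = terms[-1](x)
        for linear in reversed(terms[:-1]):
            denom = torch.where(out >= 0,
                                out.clamp(min=self.eps),
                                out.clamp(max=-self.eps))
            out = linear(x) + 1.0 / denom
        return out


class CoFrNet(nn.Module):
    """Several ladders combined by a linear head (assumed aggregation)."""

    def __init__(self, in_features: int, n_ladders: int = 8,
                 depth: int = 4, n_outputs: int = 1):
        super().__init__()
        self.ladders = nn.ModuleList(
            Ladder(in_features, depth) for _ in range(n_ladders)
        )
        self.head = nn.Linear(n_ladders, n_outputs)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = torch.cat([ladder(x) for ladder in self.ladders], dim=-1)
        return self.head(z)
```

Because every term in a ladder is linear in the input, the fitted continued fraction can be expanded and inspected term by term, which is what makes this family of models interpretable.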
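The experiment-setup cell quotes an 80/10/10 split, 200 epochs, batch size 64, Adam with learning rate 0.001 decayed by 0.1 every 50 epochs, and weight decay 0.0001. Below is a minimal training-loop sketch wiring those numbers together; the random placeholder data and the reuse of the CoFrNet sketch above are assumptions, and the quoted 0.1 dropout rate is omitted here.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

# Placeholder data standing in for one of the benchmark datasets.
X, y = torch.randn(1000, 8), torch.randn(1000, 1)
dataset = TensorDataset(X, y)

# 80% train / 10% validation / 10% test, as stated in the experiment setup.
n = len(dataset)
n_train, n_val = int(0.8 * n), int(0.1 * n)
train_set, val_set, test_set = random_split(
    dataset, [n_train, n_val, n - n_train - n_val]
)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

model = CoFrNet(in_features=8, n_outputs=1)  # sketch from the previous block
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)
loss_fn = torch.nn.MSELoss()

for epoch in range(200):
    model.train()
    for xb, yb in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
    scheduler.step()  # decays the learning rate by 0.1 every 50 epochs
```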