Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Volterra Neural Networks (VNNs)
Authors: Siddharth Roheda, Hamid Krim, Bo Jiang
JMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The proposed approach is evaluated on the UCF-101 and HMDB-51 datasets for action recognition, and is shown to outperform state-of-the-art CNN approaches (Section 6, Experiments and Results). |
| Researcher Affiliation | Academia | All three authors (Siddharth Roheda, Hamid Krim, Bo Jiang) are with the Electrical and Computer Engineering Department, North Carolina State University, Raleigh, NC 27606, USA. |
| Pseudocode | No | The paper describes algorithms and models through mathematical equations and textual descriptions, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks with structured steps. |
| Open Source Code | Yes | The code-base for our paper is available on github ( https://github.com/sidroheda/Volterra-Neural-Networks). |
| Open Datasets | Yes | We proceed to evaluate the performance of this approach on three action recognition datasets, namely, Kinetics-400 (Carreira and Zisserman, 2017), UCF-101 (Soomro et al., 2012) and HMDB-51 (Kuehne et al., 2011). We design the experiment using the CIFAR10 dataset of 60,000 32×32 color images of objects from 10 classes by allotting 50,000 images for training and 10,000 images for validation. |
| Dataset Splits | Yes | We design the experiment using the CIFAR10 dataset of 60,000 32×32 color images of objects from 10 classes by allotting 50,000 images for training and 10,000 images for validation. |
| Hardware Specification | No | The paper describes the experimental results and comparisons but does not explicitly mention the specific hardware (e.g., GPU, CPU models, or cloud computing instances) used for running the experiments. |
| Software Dependencies | No | The paper mentions using "Tensorflow (Abadi et al., 2016)" for implementation but does not specify a version number for Tensorflow or any other software libraries or dependencies used in the experiments. |
| Experiment Setup | No | The paper discusses the architecture, loss functions (cross-entropy loss, weight decay), and structural parameters of the Volterra filters (e.g., "L_z = 2 and p_{1z}, p_{2z} ∈ {0, 1, 2}"). However, it does not provide specific training hyperparameters such as learning rate, batch size, number of epochs, or the optimizer used. |
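For readers unfamiliar with the structural parameters quoted in the Experiment Setup row, the core idea of a Volterra filter is a polynomial (rather than purely linear) response to the input. Below is a minimal, illustrative sketch of a second-order (quadratic) Volterra filter for a 1-D signal; it is not the paper's convolutional implementation, and the function name `volterra2_filter` and the memory length `L` are assumptions made here for illustration only.

```python
import numpy as np

def volterra2_filter(x, w1, w2):
    """Second-order Volterra filter for a 1-D signal (illustrative sketch).

    y[n] = sum_i w1[i] * x[n-i]                       (linear term)
         + sum_{i,j} w2[i, j] * x[n-i] * x[n-j]       (quadratic term)

    x:  (N,) input signal
    w1: (L,) linear kernel, where L is the filter memory
    w2: (L, L) quadratic kernel
    """
    L = len(w1)
    # Zero-pad the past so x[n-i] is defined for the first L-1 samples.
    xp = np.concatenate([np.zeros(L - 1), x])
    y = np.empty(len(x))
    for n in range(len(x)):
        # taps[i] = x[n-i], i.e. the current sample and L-1 past samples.
        taps = xp[n : n + L][::-1]
        y[n] = w1 @ taps + taps @ w2 @ taps
    return y
```

With `w2` set to zero this reduces to an ordinary FIR filter; the quadratic kernel is what adds the multiplicative input interactions that VNNs use in place of pointwise nonlinearities.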