Efficient Target Propagation by Deriving Analytical Solution

Authors: Yanhao Bao, Tatsukichi Shibuya, Ikuro Sato, Rei Kawakami, Nakamasa Inoue

AAAI 2024

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through experiments, we have validated the effectiveness of this approach. Using the CIFAR-10 dataset, our method showcases accuracy levels comparable to state-of-the-art TP methods. |
| Researcher Affiliation | Collaboration | Tokyo Institute of Technology; Denso IT Laboratory |
| Pseudocode | Yes | Algorithm 1: Local Difference Reconstruction Loss (LDRL); Algorithm 2: Analytical Feedback Function. (See the reconstruction-loss sketch after this table.) |
| Open Source Code | No | The paper neither states that source code is released nor links to a code repository for the described methodology. |
| Open Datasets | Yes | Using the CIFAR-10 dataset, our method showcases accuracy levels comparable to state-of-the-art TP methods. ... We conduct experiments with LeNet (LeCun et al. 1989), Simplified-VGG (a simplified version of VGGNet (Simonyan and Zisserman 2015)), and MLP-Mixer (Tolstikhin et al. 2021) on the MNIST, Fashion-MNIST, and CIFAR-10 datasets. (See the data-loading sketch after this table.) |
| Dataset Splits | No | The paper uses the MNIST, Fashion-MNIST, and CIFAR-10 datasets and reports "Train" and "Test" accuracies in its tables, but gives no specific training/validation/test splits (e.g., percentages or sample counts). |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions PyTorch in the context of matrix inversion (see the inversion sketch after this table), but gives neither its version number nor any other software dependencies with versions. |
| Experiment Setup | No | The paper describes the network architectures (LeNet, Simplified-VGG, MLP-Mixer) and evaluation datasets, but omits setup parameters such as learning rates, batch sizes, number of epochs, and optimizer configurations. |
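
The Pseudocode row references Algorithm 1, the Local Difference Reconstruction Loss used to train the feedback path. Since no code is released, the sketch below is only a plausible PyTorch rendering of a difference-style reconstruction loss from the target-propagation literature, not the paper's exact LDRL; the noise scale `sigma`, the detached baseline term, and the layer shapes are all assumptions.

```python
import torch
import torch.nn as nn

def difference_reconstruction_loss(f, g, h, sigma=0.1):
    """Difference-style reconstruction loss for a forward layer f and a
    feedback layer g, evaluated at layer input h of shape (batch, dim).

    Hypothetical stand-in for the paper's LDRL: it penalizes how far
    g(f(.)) deviates from the identity in a noise ball around h.
    """
    noise = sigma * torch.randn_like(h)
    rec_noisy = g(f(h + noise))       # reconstruction of the perturbed input
    rec_base = g(f(h)).detach()       # unperturbed baseline, no gradient
    # (rec_noisy - rec_base - noise) equals
    # [g(f(h+eps)) - (h+eps)] - [g(f(h)) - h], the "difference" correction.
    return ((rec_noisy - rec_base - noise) ** 2).sum(dim=1).mean()

# Usage: train a linear feedback map g to locally invert f = tanh(Wx).
f = nn.Sequential(nn.Linear(64, 32), nn.Tanh())
g = nn.Linear(32, 64)
opt = torch.optim.Adam(g.parameters(), lr=1e-3)  # only g is updated
h = torch.randn(128, 64)
loss = difference_reconstruction_loss(f, g, h)
opt.zero_grad()
loss.backward()
opt.step()
```

Subtracting the unperturbed reconstruction removes any systematic offset between g∘f and the identity, so training concentrates on the local behavior of the feedback map around h.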
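
The Open Datasets row names MNIST, Fashion-MNIST, and CIFAR-10, all of which are available through torchvision. A minimal loading sketch follows; the data root, batch size, and bare `ToTensor` transform are assumptions, since the paper does not document its preprocessing.

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def make_loaders(name, root="./data", batch_size=128):
    """Build train/test loaders for one of the paper's three benchmarks."""
    ds_cls = {"mnist": datasets.MNIST,
              "fashion-mnist": datasets.FashionMNIST,
              "cifar-10": datasets.CIFAR10}[name]
    tf = transforms.ToTensor()  # placeholder; real runs likely normalize
    train = ds_cls(root, train=True, download=True, transform=tf)
    test = ds_cls(root, train=False, download=True, transform=tf)
    return (DataLoader(train, batch_size=batch_size, shuffle=True),
            DataLoader(test, batch_size=batch_size))

train_loader, test_loader = make_loaders("cifar-10")
```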
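
The Software Dependencies row notes that the paper invokes PyTorch matrix inversion when computing its analytical feedback function. The paper's exact closed-form solution is not reproduced in this report, so the sketch below only illustrates the general idea with a Tikhonov-regularized pseudo-inverse of a linear layer's weights; the regularizer `lam` is an assumption.

```python
import torch

def analytic_linear_feedback(W, lam=1e-4):
    """W: (out, in) weight matrix. Returns an (in, out) feedback map
    computed analytically as a regularized right pseudo-inverse."""
    eye = torch.eye(W.shape[0], dtype=W.dtype, device=W.device)
    return W.T @ torch.linalg.inv(W @ W.T + lam * eye)

W = torch.randn(32, 64)                 # forward weights: 64 -> 32
B = analytic_linear_feedback(W)         # feedback weights: 32 -> 64
h = torch.randn(128, 64)
y = h @ W.T                             # forward pass of a linear layer
h_rec = y @ B.T                         # analytic reconstruction
print(torch.norm(h_rec - h))            # nonzero: 32-dim y cannot encode all of h
```

With `lam = 0` and a full-rank `W @ W.T`, this reduces to the exact right pseudo-inverse Wᵀ(WWᵀ)⁻¹; the small regularizer keeps the inversion stable when `W @ W.T` is ill-conditioned.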